From Amazon’s Alexa to Tesla’s smart cars, mobile devices and online banking systems, the prevalence of artificial intelligence (AI) in daily life is clear. At an enterprise level, vendors aplenty promote the capabilities of such systems to provide “solutions”, whether bespoke or commoditised; they promise efficiencies and speed, automating the dull and routine and leaving us happy humans to concentrate on activities where some form of emotional interaction supposedly retains value.

The latest developments in AI application naturally grab media attention, but while the enthusiasm to implement such a system into your business may be well intentioned, don't forget that the provision of such services is at its heart no more than the delivery of another software product, a set of services, or a blend of the two. The relationship between customer and vendor still needs to be sensibly considered on an arm's-length basis, even if the risk analysis ultimately concludes that the legal negatives are outweighed by the commercial benefits of the promised functionality. So how do you know what you are getting into? Five key issues in the relationship are highlighted below.

1. Pre-engagement promises?

Irrespective of its formality, the procurement process through which the commercial decision to invest is made will almost certainly have generated stakeholder expectations as to what benefits the system will deliver. As obvious as this sounds, do the purchase contract and, in particular, the service description capture the essence of these expectations? If not, you risk being at the mercy of the vendor in terms of what the product ends up delivering; at worst, the project becomes simply one of vendor product development at your expense.

2. Working at the relationship – keeping an eye on implementation and performance

Unless the system is entirely commoditised, it will require some form of implementation (customisation) service and certainly an interface to work with your existing platform. What is the nature of the project? Is it merely the purchase of consultancy time to see if the product can be configured to do as expected, or is there a guarantee that it will work as specified? Further, how will it perform once implemented? Metrics are easier to specify for a commoditised product but, if the AI is meant to have certain capabilities, are these specified as a minimum or are you content that it will just “teach itself” and that everything will turn out alright in the end? Finally, once an operational phase is reached, what support and maintenance undertakings are offered by the vendor and at what cost?

3. What’s yours is mine – data ownership

Data ownership is now a key focus point for almost all technology contracts, and while few software vendors will attempt to lay an absolute claim to ownership of customer data, they may seek access to anonymised or aggregated data for the purposes of testing, verification or future development. You should ensure that any such request fits comfortably with your data collection and ownership model.

4. For better or worse – taking the blame

Without being deliberately Eeyore-ish about it, the question of liability is as important here as anywhere else. The delivery of "Artificial Intelligence" capability should not be viewed as a wholly new product which comes with its own set of "must-accept", vendor-friendly rules simply because of what "it" is. Inevitably vendors will attempt to dictate those rules, and there is nothing new in that: vendors naturally wish to limit their liability as far as possible. This imbalance is incrementally corrected over time through increased competition, but in the meantime customers are inevitably pushed to accept that the scale and investment of providers justifies liability terms which might not ordinarily seem acceptable. As always, ask yourself: if it all goes wrong and you get little more than your money back before walking away, will that really be enough? And what mess will you be left to clean up?

5. Avoid the messy divorce – enforcement issues

The tendency for IT specialists to belong to the global community, rather than simply a local one, means that many AI vendors are based outside the UK. The clear impact of this is that in addition to the contract containing provisions which may not be to your liking, the additional hurdle exists of enforcement through a foreign court, using a foreign legal system with all its attendant expense, stress and distraction. Ultimately, does this mean you have no effective remedy?

Provided buyers have a clear view of the intended strategic benefits, the opportunities of AI deployed in the right way are plain for all to see. But as with any contract, buyers must not be unduly swayed by shiny new toys into accepting terms which do not suit the business context.


This article is for general information purposes only and does not constitute legal or professional advice. It should not be used as a substitute for legal advice relating to your particular circumstances. Please note that the law may have changed since the date of this article.