Protecting Technology and IP in the Age of AI and Collaborative Technology Development
July 5, 2019
By Bhupinder Randhawa and Cameron Gale
Technology evolves at an ever-faster pace. As it evolves, the underlying engineering approaches to technology development and innovation also change. Intellectual property and other legal tools for technology protection and transactions must keep up with both new innovations and new development paradigms.
Artificial intelligence is transforming almost every area of commerce and human activity. However, AI remains in its infancy as a commercially viable technology. Most current AI technology fits into the machine learning category, in which advanced algorithms and statistical models are used to identify patterns in large data sets. These models and patterns are used to create predictive systems that evaluate and respond to real-world events. Current examples include automated e-mail filtering systems, smart watches that can identify heart conditions, and self-driving cars.
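The pattern-learning idea behind an e-mail filter can be sketched in a few lines. This is a toy illustration only (the messages and scoring rule are invented for this example, not drawn from any real filtering product): count how often each word appears in known spam versus legitimate mail, then score new messages by which class their words favour.

```python
# Toy sketch of learning word patterns from labelled e-mail (illustrative only).
from collections import Counter

spam = ["win a free prize now", "free money win big"]
ham = ["project meeting moved to friday", "please review the attached report"]

def word_counts(messages):
    counts = Counter()
    for message in messages:
        counts.update(message.split())
    return counts

spam_counts, ham_counts = word_counts(spam), word_counts(ham)

def score(message):
    # Positive score -> the words occur more often in spam than in ham.
    # Add-one smoothing avoids division by zero for unseen words.
    total = 0.0
    for word in message.split():
        total += (spam_counts[word] + 1) / (ham_counts[word] + 1) - 1
    return total

print(score("win free prize"))        # positive: spam-like words
print(score("meeting report friday")) # negative: legitimate-looking words
```

Commercial systems are vastly more sophisticated, but the principle is the same: the "model" is nothing more than statistics extracted from past data, which is why the quality and ownership of that data matter so much in the collaborations discussed below.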
While the end result is easy to summarize, the development work required to build commercially useful AI tools is complex and requires a wide range of technical and commercial knowledge along with immense, diverse data sets. As a result, AI development is more collaborative than many previous technologies. Different innovators (companies and individuals) must combine their expertise and assets for systems to be developed effectively. For example, an aircraft manufacturer trying to identify parts at risk of failure may have decades of data from previous inspections and assessments. However, with little or no in-house AI expertise, the manufacturer must engage an AI analytics company to help build a predictive tool.
In an AI collaboration, data owners should consider the purpose for which data was collected and the quality of that data. Often, data has been collected inconsistently over many years, by different people using different procedures and for varying purposes. Such data is often incomplete and contains errors and inconsistencies. When the data is used to train a machine learning system, data deficiencies can (and generally do) affect the accuracy of predictions made by the resulting predictive system. Culling or cleaning up the initial data set can improve the quality of the machine learning system, although this can take time and be quite costly. Data may also exhibit bias. For example, job application and hiring data may reveal biases against particular groups of applicants. A machine learning system trained with biased data will tend to exhibit the same bias. There has perhaps never been a technology that so completely exemplifies the maxim of "Garbage In, Garbage Out" as machine learning.
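The bias mechanism can be made concrete with a deliberately simple sketch (the records and groups here are hypothetical, and a real model would be far more complex): a "model" that just learns historical hire rates per applicant group will reproduce whatever bias the historical data contains.

```python
# Toy illustration (hypothetical data): a model trained on biased hiring
# records learns the bias itself, not merit.
from collections import defaultdict

# Historical records: (group, qualified, hired). All candidates below are
# equally qualified, yet group "B" was historically hired far less often.
records = [
    ("A", True, True), ("A", True, True), ("A", True, True), ("A", True, False),
    ("B", True, True), ("B", True, False), ("B", True, False), ("B", True, False),
]

def train_hire_rates(data):
    counts = defaultdict(lambda: [0, 0])  # group -> [hired, total]
    for group, qualified, hired in data:
        if qualified:
            counts[group][1] += 1
            counts[group][0] += int(hired)
    return {group: hired / total for group, (hired, total) in counts.items()}

model = train_hire_rates(records)
print(model)  # group A scored 0.75, group B only 0.25 -- the bias became the model
```

Nothing in the training step "decides" to discriminate; the skewed predictions are simply a faithful summary of skewed inputs, which is exactly why cleaning and auditing training data carries real legal weight.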
Machine learning systems usually produce better results when trained with diverse data. For example, a group of aircraft manufacturers may collectively build a better system to predict part failure if the training data is collected from all the manufacturers. This type of cross-industry cooperation is growing and introduces additional legal issues. Parties must consider what right each of them will have to own and use the resulting system, what right they may have to independently improve the system, and whether improvements must be shared or can be kept proprietary. Parties must also consider privacy and confidentiality issues around their data. For example, if a machine learning project reveals bias in a data set or a flawed safety inspection program by one company, what rights and obligations does each collaborator have to keep that information confidential?
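Why pooled data helps can be shown with another toy sketch (the readings, fleets, and the failure limit below are invented for illustration): one manufacturer whose fleet never operates near the failure limit cannot learn where that limit is, while the pooled data brackets it.

```python
# Toy illustration (hypothetical readings): estimating a part's failure limit
# from one fleet's data versus pooled data from two fleets.

def learn_threshold(data):
    """Estimate the failure limit as the midpoint between the highest safe
    reading and the lowest failing reading; None if no failures were seen."""
    safe = [x for x, failed in data if not failed]
    fail = [x for x, failed in data if failed]
    if not fail:
        return None  # never observed a failure: no basis for an estimate
    return (max(safe) + min(fail)) / 2

TRUE_LIMIT = 7.0  # the (unknown) physical limit used to label the toy data
maker_a = [(x, x > TRUE_LIMIT) for x in (1.0, 3.0, 5.0)]   # low-stress fleet
maker_b = [(x, x > TRUE_LIMIT) for x in (6.0, 8.0, 10.0)]  # high-stress fleet

print(learn_threshold(maker_a))            # None: A's data alone is blind to the limit
print(learn_threshold(maker_a + maker_b))  # 7.0: pooled data brackets the limit
```

The technical upside of pooling is real, but, as the paragraph above notes, so are the ownership, privacy, and confidentiality questions it raises.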
When collaborating, data owners and algorithm experts must carefully evaluate the risks associated with their assets and the objectives of a project to determine what representations and warranties they can give. It may make sense to set up a staged product development cycle with an early opportunity to evaluate a proof-of-concept or prototype system. If the project is not likely to succeed, it can be redesigned or stopped without further expense.
Some collaborations will produce valuable IP, which is often best protected with a patent. The patent system rewards inventors who file applications early with a comprehensive description of their invention in various configurations. This description should also look forward at least 3-5 years to cover future uses of the invention. A superficial or narrow technical disclosure may fail to provide strong patent rights. Worse still, most patent applications are published regardless of whether a patent is granted. A weak patent application may teach competitors about a company’s technology without providing support for a meaningful patent.
For some innovations, a patent may not be practical because the legal standard for a patentable invention is unlikely to be met, the inventor is unable to properly describe the invention, or a patent will not provide sufficient protection from competitors. It may be better to protect such innovations as trade secrets. Collaborators should determine upfront who will be responsible for determining protection strategies and implementing those decisions. If an innovation is to be patented, the responsible party should be required to ensure that a quality patent application is filed and prosecuted in all appropriate jurisdictions, taking the requirements of all collaborators into account.
Legal and IP challenges in collaborative AI development will continue to grow as commercial AI evolves beyond machine learning. Companies that make commercial products using these technologies effectively will see great rewards, but they will have to navigate a legal minefield along the way.
A version of this article was first published in Lexpert® Magazine, June 2019 issue, a HAB Press Ltd publication. Please visit www.lexpert.ca
Content shared on Bereskin & Parr’s website is for information purposes only. It should not be taken as legal or professional advice. To obtain such advice, please contact a Bereskin & Parr LLP professional. We will be pleased to help you.