Can Engineers Benefit From an AI Use Policy?

Oct 15, 2024

According to Microsoft and MIT Technology Review, around 58 percent of engineering and design executives expect to increase their AI spending by more than ten percent over the next two years. But it's one thing to invest in this technology, and another to put the necessary support in place for staff wishing to use these tools. Engineering managers should start by developing an AI use policy.

Since ChatGPT launched in 2022, we’ve seen an explosion of AI tools for business and personal use. It’s no different in engineering. For years, the industry has been at the forefront of AI-related technology, adopting everything from machine vision to generative design. Now, the sector is facing disruption and transformation due to chatbots and other platforms. 

Last year, Deloitte’s "State of Ethics and Trust in Technology" report found that three in four companies (74 percent) had already started testing generative AI technologies, and two in three (65 percent) said they were already using them internally. For example, the report suggests that generative AI can assist engineers and product leaders in running experiments and tests through pilot programs. These pilots allow design engineers to explore different use cases, which can drive innovation in product development and reduce the risks associated with implementing AI. 

Additionally, generative AI tools can aid software programming (code generation and optimization, bug detection, and documentation) and automate routine tasks, freeing engineers to focus on higher-level activities, such as more creative and complex design work. Engineering and manufacturing team managers must develop policies and procedures so their teams can benefit from this technology while minimizing risks to their companies.

Watch out for shadow AI

One risk is that AI could be put to use without corporate approval, a practice often referred to as shadow AI. Unauthorized AI implementation without controls could pose data breach risks and could affect product or process quality or consistency. It could even violate industry standards such as ISO 27001 for information security and ISO 9001 for quality management.

Engineering and manufacturing leaders must determine what AI use is in the best interests of the company and educate their teams accordingly. Corporate policies and guidelines can then be outlined in an AI use policy.

Such AI use policies help companies, including those with engineering and manufacturing teams, ensure that any AI technology is deployed safely, reliably, and appropriately. Crucially, they also help minimize risk and the possibility of these platforms being misused.

Below, we suggest some rules for using AI in the workplace and answer a few questions to help engineering teams and management get started.

What are the purpose and scope of an AI use policy? 

In an AI use policy, engineering managers should define the overall context, purpose, and scope of accepted AI use, including which engineering tasks the policy covers and which team members must follow it. In addition, managers can cite any other company policies that may be affected.

Importantly, the policy should also specify the functions for which engineering AI tools could be used. Does it focus primarily on generative AI and its creation of content, such as text, images, and audio? Could AI be used to design products, subassemblies, or components? Could AI be used to collect and analyze engineering, testing, or manufacturing data and suggest further actions based on that collected data? Are there other tools that engineers and their colleagues might need to use? There may be additional ‘smart’ technologies in the organization, such as data analysis platforms. 
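For teams that want the scope section to be auditable rather than prose-only, one option is to mirror it in a small machine-readable structure that internal tooling can reference. The Python sketch below is purely illustrative; the category names and values are assumptions for this example, not an established schema.

```python
# Hypothetical, machine-readable summary of an AI use policy's scope.
# All names and categories here are illustrative assumptions, not an
# established schema or standard.
POLICY_SCOPE = {
    "applies_to": [
        "design engineers",
        "manufacturing engineers",
        "test engineers",
    ],
    "permitted_functions": [
        "generative content (text, images, audio)",
        "design of products, subassemblies, and components",
        "collection and analysis of engineering, testing, and manufacturing data",
    ],
    "related_policies": [
        "data security policy",
        "acceptable use policy",
    ],
}
```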

What is the process for approving AI engineering tool use?

Engineering managers should develop evaluation criteria for determining whether AI tools meet company requirements and expectations for engineering and manufacturing procedures. They should then outline how those tools will be evaluated and vetted.

The policy should also list any pre-approved AI tools. These may include OpenAI's ChatGPT and Google Gemini, as well as tools built on these models, such as Microsoft Copilot in Edge.
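One lightweight way to make such a list enforceable is to keep it as data that internal tooling (an onboarding checklist, for example) can query. The following sketch is hypothetical: the tool entries, field names, and the is_approved helper are assumptions made for illustration only.

```python
# Hypothetical allowlist of pre-approved AI tools, kept as data so that
# internal tooling can query it. Entries and fields are illustrative only.
from dataclasses import dataclass

@dataclass(frozen=True)
class ApprovedTool:
    name: str
    vendor: str
    allowed_uses: tuple   # e.g., ("drafting", "code-assist")
    data_classes: tuple   # data classifications permitted as input

APPROVED_TOOLS = (
    ApprovedTool("ChatGPT", "OpenAI", ("drafting", "code-assist"), ("public",)),
    ApprovedTool("Gemini", "Google", ("drafting", "research"), ("public",)),
    ApprovedTool("Copilot", "Microsoft", ("drafting", "summarizing"),
                 ("public", "internal")),
)

def is_approved(tool: str, use: str, data_class: str) -> bool:
    """Return True if the named tool is pre-approved for this use and data class."""
    return any(
        t.name == tool and use in t.allowed_uses and data_class in t.data_classes
        for t in APPROVED_TOOLS
    )

# Example: code assistance on public data is allowed under this sketch,
# but feeding the same tool internal data is not.
print(is_approved("ChatGPT", "code-assist", "public"))    # True
print(is_approved("ChatGPT", "code-assist", "internal"))  # False
```

Keeping the allowlist as data rather than prose also means the policy document and any enforcement tooling can be updated in one place as tools are approved or retired.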

What are the rules of AI use for engineers?

Professionals working in engineering and production facilities should define the technical inputs and outputs permitted for AI programs. Such details help ensure compliance with data security, privacy, and ethical standards. Permissible inputs and outputs should be clearly stated. Input covers information given to the AI model, such as material properties, machine parameters and configurations, or historical manufacturing process data. Output is what the model then produces, which could range from 3D models or CAD drawings to predictive maintenance schedules or machine adjustments.
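These input/output rules can also be expressed as a simple gate that checks a proposed request before data is sent to an AI tool. The sketch below is a minimal illustration under assumed category names; the permitted sets and the check_request function are hypothetical, not part of any standard.

```python
# Hypothetical input/output gate reflecting a written AI use policy.
# Category names and the rules themselves are illustrative assumptions.
PERMITTED_INPUTS = {
    "material_properties",
    "machine_parameters",
    "historical_process_data",
}
PERMITTED_OUTPUTS = {
    "3d_model",
    "cad_drawing",
    "maintenance_schedule",
    "machine_adjustment",
}

def check_request(input_categories: set, expected_output: str) -> list:
    """Return policy violations for a proposed AI request (empty if compliant)."""
    violations = [
        f"input category not permitted: {c}"
        for c in sorted(input_categories - PERMITTED_INPUTS)
    ]
    if expected_output not in PERMITTED_OUTPUTS:
        violations.append(f"output type not permitted: {expected_output}")
    return violations

# Example: customer data is not a permitted input category in this sketch.
print(check_request({"material_properties", "customer_data"},
                    "maintenance_schedule"))
# ['input category not permitted: customer_data']
```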

It is also important to develop rules for how AI use is communicated to other parties.

Moving forward with an AI use policy

With more and more engineering departments adopting AI tools and bringing them into their workflows, it's important that engineering managers protect themselves and their staff and operate within legal and regulatory boundaries. A clear AI use policy is a practical first step toward doing so.

For advice and support on drafting AI use policies, visit www.arbor.law.