Generative AI: An aviation panacea?

Last updated:
Aug 14, 2023

Generative AI (GenAI) is regaling the society of business leaders, researchers and experts with tales of innovation, doom and gloom, and sceptical caution.

Whilst GenAI offers unique opportunities for assisted decision making and efficiency, it is also at risk of abuse and direct threats, particularly in the case of cyber threats.

Nowhere is this more true than in aviation, which has to contend with a number of challenging trends facing airlines and operators: increased air traffic, environmental standards (and push to net zero), greater competitiveness in the industry, and a complex threat environment both from physical threats and cyber actors.

As cliché as is custom in any article on a piece of technology, GenAI could help with a number of these challenges.

For example, it could assist in aircraft design in the shape of surrogate modelling, wherein complex models are simplified by GenAI in order to reduce computational time. Virtual assistance could be used in the cockpit to assist pilots (for example Thales’ FlytX tool). Predictive maintenance could also benefit from incorporating wider datasets to advise more accurately when aircraft components need repairs or replacing.

Finally, threat modelling and risk management is another area GenAI is having an impact, with firms such as Osprey incorporating it into their threat intelligence products. From a sustainability point of view, aviation could benefit from more accurate analysis of data sets incorporated into compliance to advise on ESG projects and targets.

In essence, GenAI could speed up decision making and build efficiency into the industry as a whole.

However, aviation, like many critical industries, are subject to a large number of safety standards, regulations, and threats. The more digitally native airlines become, arguably the greater risk from cyber threats and interference. From a business continuity point of view, digitally native infrastructure, whilst more computationally efficient, runs the risk of lacking well-understood manual processes in the event of a cyber incident.

If staff have to revert to manual processing, how well equipped and skilled are they to do so? If AI tools are offline, are staff capable of operating without them?

GenAI cyber threats create risk in two ways: first, attackers may look to target a GenAI application in a business to either take it offline, exfiltrate or tamper with data, or second, they may use a GenAI application themselves to help write more effective form of malware that supports other attack vectors or methods of attack.

Whilst the latter is an issue at the heart of the role GenAI plays in society and business, the former is highly relevant to airlines and other businesses in aviation.

The impact of these cyber threats on airlines is entirely dependent on how and where GenAI is present in the business model and for what reasons. If GenAI is embedded as a tool to manage customer Personally Identifiable Information (PII) or Payment Card Information (PCI) the risk notably increases from a data security perspective. However, if GenAI embedded in the pre-flight checks or aircraft maintenance is tampered with, there are significant physical safety risks.

This is where appropriate pre-implementation data impact assessments and subsequent routine risk assessments are critical to the use of GenAI in aviation.

The data impact assessment needs to outline the inputs, outputs, ownership and accountability of the application, whilst ultimately drives the required cyber security controls. Defining what the application is expected to achieve and how it will be used and trained will help to understand further risks around misuse, misalignment (hallucinations) from the datasets, and governance requirements.

Furthermore, outlining any third party liability – either as the third party or for the vendor – is a critical legal step. Once this data impact assessment is completed, the cyber security controls implemented around the datasets used to train the model and the application itself should centre around access control management, privileged access, advanced monitoring capabilities, and endpoint security for any endpoints using the application.

Once implemented, businesses should stand up an internal oversight body (either working group or committee) to conduct frequent risk assessments to determine if either the threat or exposure has shifted.

Whilst securing the GenAI tool is a critical aspect of risk management in aviation, there are also significant tech-adjacent risk management tasks:

  1. Compliance will be of particular focus in any GenAI tool used for critical services. Some examples of the increasing legislative and regulatory environment for managing AI models include the EU’s AI Act and the specific European Union Aviation Safety Agency AI Roadmap 2.0 which includes a voluntary ‘AI Pact’.
  2. Governance within the business is critical. Businesses should immediately update internal policies to reflect the new use of data. Airlines and other businesses in aviation can also get a step ahead of governance issues by working with regulators in aviation and researchers to build realistic frameworks.
  3. Data ethics are a major concern for AI and GenAI. Therefore, businesses will need to run regular audits on datasets for any ethical anomalies and carefully select training data. The training data in particular is a focus of regulators.
  4. A number of standards around AI data quality, assurance, risk management are under development as part of wider Enterprise Risk Management.

With these risks in mind, airlines and other businesses in the aviation industry must weigh up the benefits and drawbacks of where and how GenAI could be embedded into systems and processes.

Ultimately, this is a question of balancing multiple factors including potential risk, the cost of effective risk management and compliance, the cost of implementation and hosting, the investment in time to implement, and the limitations of the product’s scale. The results from this balancing act will determine the utility GenAI has in the aviation industry.

If you’re interested in listening to more on the role of GenAI in aviation, please listen to the recent webinar with Osprey’s CEO Andrew Nicholson.

AI Bulletin: August 2023

July was a busy month for the shifting AI landscape and here are a few interesting events:

The White House announced that AI companies have pledged voluntarily to implement water-marking AI generated content and other measures to give users a clearer view of where the information came from.

This raises a few questions including whether or not we (as consumers) should be regarding the information from GenAI models as ‘less’ trustworthy? Nonetheless, the US has lagged behind the EU on this, who are currently drafting rules for GenAI, including distinguishing between deep-fakes and real images. It should be noted that there is scepticism of AI-detectors having the capability to accurately detect when text is derived from a GenAI tool. Read more on the White House announcement here.

Google have implemented a red team that specialises in testing AI applications and systems.

Their AI Red Team will replicate common tactics, techniques and procedures by threat actors that include ‘prompt attacks, training data extraction, backdooring the model, adversarial examples, data poisoning and exfiltration’. This is a good step toward vendors providing adequate pre-deployment testing and on-going maintenance. Read more about Google’s AI Red Team here.

Startup Protect AI raised $35 million in Series A funding round.

Protect AI provide a number of tools include an AI Radar which delivers visibility into training data and testing. They also have tools to mitigate ‘types of AI attacks’. This is one AI startup worth watching as they try to ‘de-risk’ AI. Read more on Protect AI’s Series A funding round here.

In the race to create as many GenAI solutions as possible, Dell have teamed with illustrious Nvidia on ‘Project Helix’.

The companies are delivering validated designs for inference systems based on NVIDIA accelerators and software, a professional services offering to help enterprises embrace generative AI, and a new Dell Precision workstation for AI development. Read more about this exciting collaboration here.

What's inside?

Stay a step ahead in an increasingly complex and unpredictable world

Our consultants stay on top of the latest megatrends that influence how organisations are attacked, whether related to terrorism, criminality, war or cyber.

We document their analysis here. Be the first to see it.

Subscribe