Synoptix Attend techUK Digital Ethics Summit


After attending the techUK Digital Ethics Summit last week, our Technical Innovation Manager, Callum Cockburn, reflects on Feryal Clark MP’s (Parliamentary Under-Secretary of State for AI and Digital Government, DSIT) keynote speech, incorporating comments from across the rest of the summit. The event offered a valuable opportunity to connect with peers and industry leaders, as Synoptix continues to utilise our AI Practitioners, Cybersecurity Experts, and Systems Engineers to translate our experience from high-assurance industries into AI Assurance and Governance.

 

Headline Takeaways 

  • The Government has a clear desire to drive an innovative AI sector that will scale and win globally – though it is unclear how they will overcome some of the significant geopolitical and commercial challenges of doing so. 

  • They intend to publish an AI Opportunities Action Plan setting out how they will secure three critical dependencies: infrastructure, talent, and data access. 

  • They intend to formalise the AI Safety Institute in law as a key component of their plan for Responsible AI – aiming towards a goal of AI systems being used Safely, Ethically, and Responsibly. The legislation will also contain a highly targeted intervention for high-grade frontier AI models, formalising previously voluntary commitments into mandatory ones. 


Government Intentions 

It is very clear that the Government intends to continue advancing AI systems across Government, as well as encouraging the private sector to adopt AI solutions to improve productivity. The consolidation of cross-Government Digital Transformation under DSIT serves to concentrate Digital, Data, and Technology (DDaT) skillsets from across Government, with the goal of reducing the burden placed by Government services upon people. This was echoed by Clark’s parliamentary colleague Andrew Pakes MP (Labour MP for Peterborough), who highlighted the large cross-party caucus that is highly enthusiastic about the potential for technological transformation.  


Mrs Clark also set out the Government’s intention to develop an AI Essentials toolkit that will sit on the AI Assurance platform, building on similar approaches in the Cybersecurity domain. The first tool in this toolkit – AI Management Essentials, a self-assessment tool for business – was recently published for consultation. The Government is considering embedding the use of this toolkit in its procurement processes – something that Synoptix would strongly support. However, Synoptix has also responded to the consultation, raising some concerns about areas that are not currently fully captured by the tool:  

  • Confusion between responding as a developer of AI systems and responding as a user of AI systems (or both); 

  • Lack of clarity over good Risk Management practices, and how organisations can ensure that they are fairly evaluating the real-world risks of AI systems; 

  • Lack of definition of what constitutes Suitably Qualified and Experienced Personnel (SQEP) for evaluating risks and impacts of AI systems. 


The Nature of Regulation 

Mrs Clark set out both the societal and economic cases for an AI Assurance Ecosystem, which she framed as the key underpinning for a feasible regulatory landscape: 

  • Societal: the Minister highlighted the potential benefits of AI applied to public sector improvement, greatly improving how the state interacts with citizens, contrasted against the risk of bias and discrimination, particularly where existing inequalities affect vulnerable groups.  

  • Economic: the Minister raised the potential economic benefit both from direct AI activity and from the economic activity created by a flourishing AI Assurance ecosystem in its own right.  


There is no doubt that these points are valid, but the framing also seems to understate the scale of the problem. A number of speakers throughout the rest of the day highlighted challenges, including: 

  • The difficulty of developing pragmatic regulation, contrasting EU approaches with those of Japan and Singapore – Jeff Bullwinkel (VP and Deputy General Counsel, Microsoft EMEA); 

  • Existing assurance techniques (such as audits) are still immature for AI systems, and there is a lack of agreement about what constitutes a “proper” audit for AI systems – Melissa Heikkilä (Senior Reporter, MIT Technology Review). 


Synoptix also believes that current assurance approaches carry a significant risk of risk-siloing. The Government’s principles-based approach to AI assurance regulation risks placing less emphasis on cross-cutting risks, especially if multiple teams within an organisation are involved in developing risk assessments and treatment strategies.  


Non-AI Transformation and Ethical Challenges 

Clark ended her speech by raising an important point that is too often missed in the current era of AI hype: the non-AI elements of transformation. For example, she highlighted two important Government initiatives that will make a meaningful difference to citizens’ interactions with government services: 

  • Gov.UK One Login – a Single Sign-On service for government services, aimed at replacing the up to 190 different accounts and 44 different sign-in methods currently used to access Government services online; 

  • Gov.UK Forms – replacing outdated paper forms with standardised, convenient, and secure digital forms. By November 2024, over 87 forms had been created, receiving 165,000 submissions and saving form processors an estimated three years’ worth of time.  


Synoptix believes it is crucial to retain focus on systemic digital transformation, rather than the pure implementation of currently popular technologies without reference to the problems that need to be solved. It is great to see this highlighted by the Government – without resolving “basic” digital blockers and implementing digital enablers, the integration of advanced technologies will continue to fall short of its promised ROI. 

 

Synoptix Continuing Investment 

Synoptix continues to invest in applying our experience with AI, systems engineering, and cybersecurity from high-assurance domains to AI Assurance and Governance. We are utilising our links with world-leading research institutions – such as King’s College London, the University of Bristol, the University of the West of England, Loughborough University, and others – to deliver both fundamental and applied research and innovation across the artificial intelligence domain. Multi-disciplinary events such as techUK’s Digital Ethics Summit are vital to delivering real and impactful cross-sector change, and Synoptix is committed to further development in this area.


 

 

Images courtesy of @techUK




