EnnCore: End-to-End Conceptual Guarding of Neural Architectures

Lead Research Organisation: University of Manchester
Department Name: Computer Science

Abstract

EnnCore will address a fundamental security problem in neural-based (NB) architectures by allowing system designers to specify and verify a conceptual/behavioral hard-core for the system, which can be used to safeguard NB systems against unanticipated behavior and attacks. It will pioneer the dialogue between contemporary explainable neural models and full-stack neural software verification. We will, therefore, develop methods, algorithms and tools to achieve fully-verifiable intelligent systems, which are explainable, whose correct behavior is guaranteed, and which are robust against attacks.

EnnCore will be validated on three diverse and high-impact application scenarios: (i) securing an AI system for cancer diagnosis (health -- Cancer Research UK, The Christie); (ii) ensuring ethical and legal behavior of a conversational agent (health -- Cancer Research UK, The Christie); and (iii) securing an AI system for demand response (energy -- Urbanchain). The use cases will be co-designed and validated under real-world data conditions with the help of one clinical and one industrial partner.

As a result, EnnCore will address a fundamental research problem: ensuring the security of neural-enabled components by taking into account their entire lifecycle, from development to deployment. Solving this research problem will have a far-reaching impact on areas such as health care, ethically grounded AI and demand response, which heavily depend on secure and trusted software components to meet safety-critical requirements. Therefore, our overall research objective is to have a long-term impact on writing secure and trusted AI-based software components, thus contributing to a shared vision of fully-verifiable software, where the underlying NB architectures are built with strong symbolic and mathematical guarantees.

To achieve this objective, EnnCore will design and validate a full-stack symbolic safeguarding system for NB architectures. This project will advance the state-of-the-art in the development of secure Deep Neural Network (DNN) models by mapping, using and extending explainability properties of existing neuro-symbolic DNN architectures (e.g., Graph Networks, Differentiable Inductive Logic Programming), thus safeguarding them with symbolic verification, abstract interpretation and program synthesis methods. EnnCore will pioneer the multi-disciplinary dialogue between explainable DNNs and formal verification.

In particular, EnnCore will deliver safeguarding for safety-critical NB architectures with the following novel properties:

(1) Full-stack symbolic software verification: we will develop the first bit-precise and scalable symbolic verification framework to reason over actual implementations of DNNs, thereby providing further guarantees of security properties concerning the underlying hardware and software, which are routinely ignored in the existing literature (an illustrative C sketch of such an implementation-level defect follows this list).

(2) Explainability / Interpretability: EnnCore will pioneer the integration of knowledge-based and neural explainability methods to support end-users in specifying security constraints and diagnosing security risks, in order to provide assurances about system security as NB models evolve. Particular attention will be given to the quantitative and qualitative characterization of semantic-drift phenomena in security scenarios.

(3) Scalability: we will systematically combine contemporary symbolic methods for explaining, interpreting and verifying neural representations. In particular, we will develop a neuro-symbolic safeguard framework that links structural knowledge-based representation elements to attentional architecture elements, achieving scalability and precision in an unprecedented manner. We will also develop new learning techniques for reusing information across different verification runs, to reduce formula size and consistently improve constraint solving.
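As a concrete illustration of the implementation-level defects targeted by item (1) above, the hypothetical C fragment below shows a fixed-point neuron whose 16-bit accumulator overflows on legitimate inputs; reasoning over an idealised real-valued model would miss this, whereas bit-precise verification of the actual code exposes it. The function, number formats and values are illustrative assumptions, not EnnCore code.

    #include <assert.h>
    #include <stdint.h>
    #include <stdio.h>

    #define N 4

    /* Hypothetical fixed-point neuron: weights and inputs in Q7.8 format. */
    static int16_t neuron(const int16_t w[N], const int16_t x[N])
    {
        int16_t acc = 0;                      /* defect: accumulator is too narrow */
        for (int i = 0; i < N; i++)
            acc += (int16_t)((w[i] * x[i]) >> 8);
        return acc;
    }

    int main(void)
    {
        const int16_t w[N] = {0x7F00, 0x7F00, 0x7F00, 0x7F00};   /* ~127.0 each */
        const int16_t x[N] = {0x0200, 0x0200, 0x0200, 0x0200};   /*    2.0 each */
        int16_t y = neuron(w, x);
        printf("y = %d\n", y);
        /* All weights and inputs are positive, so the output should be too;
           on typical two's-complement targets the narrow accumulator wraps
           around and the assertion fails. */
        assert(y >= 0);
        return 0;
    }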

Planned Impact

EnnCore will be co-designed with the support of clinical and industrial partners in the fields of medical diagnosis and energy, respectively. The supporting use-case scenarios, requirements and benchmarks will be defined in coordination with our partners, thus mapping to their actual needs. This support from our partners will allow us to achieve medium- and long-term impact on regular users and maintainers of intelligent systems. Equally important, there exists an ever-growing demand in various areas of Computer Science and Engineering to build fully-verifiable intelligent systems that are secure-by-design. There are various applications of this approach, including autonomous vehicles, telecommunication systems, process control systems, robotics systems, and automatic pilot avionics. The EnnCore tools will push the state-of-the-art in formal verification and explainability techniques to provide assurances about the security of AI applications and to explain their security properties.

Currently, two main aspects hinder further advances towards fully-verifiable intelligent systems that are secure-by-design. First, there is a lack of automated verification methods able to detect low-level and behavioral failures of AI applications, because such methods must incorporate knowledge about implementation aspects, system purpose and associated features. Second, due to system size and complexity, state-of-the-art formal verification techniques typically suffer from scalability restrictions. Developing a custom, scalable symbolic verification method will allow us to ensure the security of intelligent systems, which is the main impact of this research.

EnnCore will target the leading conferences and journals in AI, software verification and synthesis. The project aims to shape new sub-areas within the AI community. We want to contribute to shaping strategic emerging communities: (i) Explainable & Trustworthy AI, (ii) Debugging Neural Systems, (iii) Secure AI and (iv) Scalable Program Verification. Workshops and tutorials at the leading AI and formal verification conferences will be used as vehicles for engaging new communities.

All software and non-sensitive resources developed during the project will be made available on open-source/open-data terms. The following general-purpose software frameworks and benchmarks will be made available: (i) the neuro-symbolic safeguarding framework, including the EnnCore-EXP and EnnCore-SYMB tools; (ii) benchmarks and a supporting experimental test bench for secure AI applications; (iii) software verification and testing benchmarks for the SV-COMP and Test-Comp competitions associated with the annual TACAS conference; and (iv) program synthesis benchmarks for the SyGuS competition, which runs in conjunction with CAV and SYNT. These general-purpose software frameworks and benchmarks will impact the academic communities in two ways. First, we expect to demonstrate the efficacy of our approach in these competitions. Since we will build tools that can be used to ensure verified, trustworthy software systems, we expect that other groups will directly pick up on this. Second, the contributed benchmarks will influence the further development of software verification and synthesis tools for AI-based applications, with a particular focus on safety and security aspects.

Lastly, the PI Cordeiro directs public engagement activities within the Department of Computer Science at the University of Manchester; he has funds (available via the department) to continue this work. Understanding how to educate and enthuse the general public about formal verification, security and AI, including topics related to this project, is a challenge that he enjoys and believes can be transformative for the future of computing in the UK.

Publications


Agiakatsikas D (2024) Single Event Effects Assessment of UltraScale+ MPSoC Systems Under Atmospheric Radiation in IEEE Transactions on Reliability

Alshareef A. (2023) Weight-based Semantic Testing Approach for Deep Neural Networks in CEUR Workshop Proceedings

 
Description NeuroCodeBench at SV-COMP 2023: a Plain C Neural Network Benchmark for Software Verification
Geographic Reach Multiple continents/international 
Policy Influence Type Contribution to new or improved professional practice
URL https://sv-comp.sosy-lab.org/
 
Title NeuroCodeBench: a plain C neural network benchmark for software verification 
Description Safety-critical systems with neural network components require strong guarantees. While existing neural network verification techniques have shown great progress towards this goal, they cannot prove the absence of software faults in the network implementation. NeuroCodeBench is a verification benchmark for neural network code written in plain C. It contains 32 neural networks with 607 safety properties divided into 6 categories: maths library, activation functions, error-correcting networks, transfer function approximation, probability density estimation and reinforcement learning. Our preliminary evaluation shows that state-of-the-art software verifiers struggle to provide correct verdicts, due to their incomplete support of the standard C mathematical library and the complexity of larger neural networks. An illustrative sketch of the SV-COMP-style harness format used by such benchmarks follows this entry.
Type Of Material Database/Collection of data 
Year Produced 2023 
Provided To Others? Yes  
Impact This benchmark was accepted as part of the official SV-COMP benchmark suite. SV-COMP is the international competition on software verification. The inclusion of NeuroCodeBench in SV-COMP guarantees that all present and future software verification tools will be tested on neural network code. In this regard, the 2023 edition of SV-COMP has already seen great changes in the behaviour of existing software verification tools. Indeed, the authors of around a dozen tools have improved their support of neural network code, thus avoiding the high number of incorrect results that we observed in our preliminary experiments.
URL https://arxiv.org/abs/2309.03617
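To make the benchmark format concrete, the fragment below sketches a typical SV-COMP-style plain-C harness for a tiny activation-function property, using the standard competition conventions (__VERIFIER_nondet_float, reach_error). It is a minimal illustration only; the actual NeuroCodeBench networks and property encodings are larger and may differ.

    extern float __VERIFIER_nondet_float(void);
    extern void abort(void);
    void reach_error(void) { }   /* designated error location, per SV-COMP convention */

    static float relu(float x) { return x > 0.0f ? x : 0.0f; }

    int main(void)
    {
        float x = __VERIFIER_nondet_float();   /* arbitrary float, including NaN */
        float y = relu(x);
        /* Safety property: the ReLU output is never negative.  A verifier must
           decide whether reach_error() is reachable for any possible input. */
        if (!(y >= 0.0f)) {
            reach_error();
            abort();
        }
        return 0;
    }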
 
Title Peer-to-Peer, Self-Consumption and Transactive Energy Literature Review Data Extraction Table 
Description This document contains data extracted from 139 peer reviewed journal articles about peer-to-peer, community or collective self-consumption and transactive energy markets. The articles were identified through a systematic literature search. Full details of the search methodology are available in the methodology section of the following paper, published as pre-print on SSRN and available at http://dx.doi.org/10.2139/ssrn.3959620. Data has been extracted from each paper about the market presented in the paper, and the businesses which act in that market. Data about the market has been extracted using a modified version of The Business Ecosystem Architecture Modelling (TEAM) framework. Data about the businesses in those markets was extracted using a modified version of the Business Model Canvas. The unit of analysis for data extracted using the TEAM framework was a market and the unit of analysis for data extracted using the Business Model Canvas was a business. 
Type Of Material Database/Collection of data 
Year Produced 2022 
Provided To Others? Yes  
Impact Two systematic literature reviews on local energy trading markets have been published based on the dataset. 
URL https://figshare.manchester.ac.uk/articles/dataset/Peer-to-Peer_Self-Consumption_and_Transactive_Ene...
 
Title Probe-ably: open source neural probing framework 
Description Probe-Ably is a framework designed for PyTorch to support researchers in the implementation of probes for neural representations in a flexible and extensible way. 
Type Of Material Computer model/algorithm 
Year Produced 2021 
Provided To Others? Yes  
Impact Framework for facilitating the inspection of representational aspects of neural architectures. 
URL https://github.com/ai-systems/Probe-Ably
 
Title The FormAI Dataset: Generative AI in Software Security through the Lens of Formal Verification 
Description FormAI is a novel AI-generated dataset comprising 112,000 compilable and independent C programs. All the programs in the dataset were generated by GPT-3.5-turbo using a dynamic zero-shot prompting technique and vary in complexity. Some programs handle complicated tasks such as network management, table games, or encryption, while others deal with simpler tasks like string manipulation. Each program is labeled based on vulnerabilities in the code using a formal verification method based on the Efficient SMT-based Bounded Model Checker (ESBMC). This strategy conclusively identifies vulnerabilities without reporting false positives (due to the presence of counter-examples) or false negatives (up to a certain bound). The labeled samples can be utilized to train Large Language Models (LLMs) since they contain the exact program location of the software vulnerability. A hypothetical example of the kind of labeled defect appears after this entry.
Type Of Material Database/Collection of data 
Year Produced 2023 
Provided To Others? Yes  
Impact The FormAI dataset reveals that GPT-3.5-turbo introduces vulnerabilities when generating C code. The diverse set of programming scenarios exposed risky coding strategies and practices used by GPT-3.5 when handling certain tasks. We employed a zero-shot prompting method to cover numerous programming situations for C code generation with GPT-3.5-turbo, and we used the ESBMC bounded model checker to produce formally verifiable labels for bugs and vulnerabilities. In our experiment, we allocated a verification time of 30 seconds to each program, with the unwinding parameter set to 1. In total, 197,800 vulnerabilities were detected by ESBMC; some programs contain multiple different errors. The labeling is provided in a .csv file, which includes: Filename, Vulnerability type, Function name, Line number, and Error type. In addition, we offer a separate .csv file containing the C code as a separate column. Finally, we connected the identified vulnerabilities to CWE identifiers. Based on these findings, we answer our research questions:
- RQ1: How likely is purely LLM-generated code to contain vulnerabilities on the first output when using simple zero-shot text-based prompts? Answer: at least 51.24% of the 112,000 C programs contain vulnerabilities, indicating that GPT-3.5 often produces vulnerable code. One should therefore exercise caution when considering its output for real-world projects.
- RQ2: What are the most typical vulnerabilities LLMs introduce when generating code? Answer: for GPT-3.5, arithmetic overflow, array bounds violation, buffer overflow, and various dereference failure issues were among the most common vulnerabilities. These vulnerabilities are pertinent to MITRE's Top 25 list of CWEs.
URL https://github.com/FormAI-Dataset
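The hypothetical program below is not taken from FormAI; it simply illustrates the kind of defect that ESBMC labels in the dataset, here an out-of-bounds array write reachable from unvalidated input, which would be recorded with its file name, function, line number and error type.

    #include <stdio.h>

    int main(void)
    {
        int scores[5] = {0};
        int index;

        printf("Enter a slot (0-4): ");
        if (scanf("%d", &index) != 1)
            return 1;

        /* Array bounds violation: nothing prevents index from being 5, -1, etc.
           A bounded model checker reports this with a concrete counterexample,
           which becomes the vulnerability label for this program. */
        scores[index] = 100;

        printf("scores[%d] = %d\n", index, scores[index]);
        return 0;
    }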
 
Description Privacy-friendly Electricity Consumption Prediction 
Organisation University of Leuven
Country Belgium 
Sector Academic/University 
PI Contribution We have been involved in discussions, design and implementation of privacy-friendly electricity consumption prediction algorithms.
Collaborator Contribution Colleagues from the University of Leuven contributed to the design and evaluation of these algorithms.
Impact A report has been produced.
Start Year 2021
 
Description Towards safety of clinical decision support systems 
Organisation Cancer Research UK
Department Manchester Institute
Country United Kingdom 
Sector Academic/University 
PI Contribution This is an ongoing research collaboration in which we are exploring how to improve the safety of clinical decision support systems, with a particular emphasis on oncology.
Collaborator Contribution - Definition of a problem space which is clinically relevant. - Access to domain expertise in oncology. - Critical assessment of models.
Impact - This is still ongoing research. Publications and safe versions of classification methods are expected to be released soon.
Start Year 2021
 
Title CEG4N: Counter-Example Guided Neural Network Quantization Refinement 
Description We propose Counter-Example Guided Neural Network Quantization Refinement (CEG4N). This technique combines search-based quantization and equivalence verification: the former minimizes the computational requirements, while the latter guarantees that the network's output does not change after quantization. A toy sketch of the resulting refinement loop follows this entry.
Type Of Technology Software 
Year Produced 2022 
Open Source License? Yes  
Impact Neural networks are essential components of learning-based software systems. However, their high compute, memory, and power requirements make using them in low resources domains challenging. For this reason, neural networks are often quantized before deployment. Existing quantization techniques tend to degrade the network accuracy. We evaluate CEG4N on a diverse set of benchmarks that include large and small networks. Our technique was successful at quantizing the networks in our evaluation while producing models with up to 72% better accuracy than state-of-the-art techniques. 
URL https://easychair.org/smart-program/FLoC2022/paper2189.html
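To illustrate the counter-example guided loop described in the entry above, the toy C sketch below quantises a single weight to k fractional bits, uses a brute-force search over a small input grid as a stand-in for the formal equivalence check, and refines the bit-width whenever a counterexample is found. All names, the tolerance and the search procedure are illustrative assumptions, not the CEG4N implementation.

    #include <math.h>
    #include <stdio.h>

    #define EPS 0.05   /* hypothetical equivalence tolerance */

    static double model(double w, double x) { return w * x; }

    static double quantise(double w, int bits)
    {
        double s = (double)(1 << bits);
        return round(w * s) / s;
    }

    /* Exhaustive search over a small input grid plays the role of the
       equivalence verifier: returns 1 and sets *cex if a counterexample exists. */
    static int find_counterexample(double w, double wq, double *cex)
    {
        for (double x = -8.0; x <= 8.0; x += 0.125)
            if (fabs(model(w, x) - model(wq, x)) > EPS) { *cex = x; return 1; }
        return 0;
    }

    int main(void)
    {
        const double w = 0.7371;
        double cex;
        for (int bits = 1; bits <= 16; bits++) {        /* search-based quantization */
            double wq = quantise(w, bits);
            if (!find_counterexample(w, wq, &cex)) {    /* equivalence check passed */
                printf("quantized to %d fractional bits (w=%f, wq=%f)\n", bits, w, wq);
                return 0;
            }
            printf("counterexample x=%f at %d bits; refining\n", cex, bits);
        }
        return 1;
    }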
 
Title Certified Private Inference on Neural Networks via Lipschitz-Guided Abstraction Refinement 
Description Private inference on neural networks requires running all the computation on encrypted data. Unfortunately, neural networks contain a large number of non-arithmetic operations, such as ReLU activation functions and max pooling layers, which incur a high latency cost in their encrypted form. To address this issue, the majority of private inference methods replace some or all of the non-arithmetic operations with a polynomial approximation. This step introduces approximation errors that can substantially alter the output of the neural network and decrease its predictive performance. We propose a Lipschitz-Guided Abstraction Refinement method (LiGAR), which provides strong guarantees on the global approximation error. Our method is iterative, and leverages state-of-the-art Lipschitz constant estimation techniques to produce increasingly tighter bounds on the worst-case error. At each iteration, LiGAR designs the least expensive polynomial approximation by solving the dual of the corresponding optimization problem. Our preliminary experiments show that LiGAR can easily converge to the optimum on medium-sized neural networks. A simplified version of the kind of layer-wise error bound involved is sketched after this entry.
Type Of Technology Software 
Year Produced 2023 
Open Source License? Yes  
URL https://zenodo.org/record/7897617
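As background for the entry above, the bound below shows, in a textbook-style simplification (not necessarily LiGAR's exact formulation), how layer-wise Lipschitz constants turn local approximation errors into a global worst-case guarantee. Here f = f_K ∘ ... ∘ f_1 is the exact network, each layer f_i is replaced by a polynomial approximation with local error at most ε_i, and L_j bounds the Lipschitz constant of layer f_j.

    \[
      \bigl\lVert f(x) - \hat{f}(x) \bigr\rVert
      \;\le\;
      \sum_{i=1}^{K} \Bigl( \prod_{j=i+1}^{K} L_j \Bigr) \varepsilon_i,
      \qquad \text{where } \lVert f_i(z) - \hat{f}_i(z) \rVert \le \varepsilon_i
      \text{ and } \mathrm{Lip}(f_j) \le L_j .
    \]

Tighter estimates of the L_j therefore translate directly into a tighter global error bound, which is what makes improved Lipschitz estimation useful in the refinement loop.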
 
Title EBF 4.2: Black-Box Cooperative Verification for Concurrent Programs 
Description Combining different verification and testing techniques could, at least in theory, achieve better results than each individual one on its own. The challenge in doing so is how to take advantage of the strengths of each technique while compensating for their weaknesses. EBF 4.2 addresses this challenge for concurrency vulnerabilities by creating Ensembles of Bounded model checkers and gray-box Fuzzers. In contrast with portfolios, which simply run all possible techniques in parallel, EBF strives to obtain closer cooperation between them. This goal is achieved in a black-box fashion. On the one hand, the model checkers are forced to provide seeds to the fuzzers by injecting additional vulnerabilities in the program under test. On the other hand, off-the-shelf fuzzers are forced to explore different interleavings by adding lightweight instrumentation and systematically re-seeding them.
Type Of Technology Software 
Year Produced 2023 
Open Source License? Yes  
Impact EBF 4.2 is an improvement over a previous tool by the same authors, which found a vulnerability in the cryptography library wolfMQTT. EBF participated in the SV-COMP'21 and SV-COMP'22 software verification competitions with promising results.
URL https://research.manchester.ac.uk/en/publications/ebf-42-black-box-cooperative-verification-for-conc...
 
Title FuSeBMC: A White-Box Fuzzer for Finding Security Vulnerabilities in C Programs 
Description FuSeBMC is a test generator for finding security vulnerabilities in C programs. FuSeBMC incrementally injects labels to guide Bounded Model Checking (BMC) and Evolutionary Fuzzing engines to produce test cases for code coverage and bug finding. Additionally, FuSeBMC uses both engines to produce smart seeds. First, the engines run with a short time limit on a lightly instrumented version of the program to generate the seeds. The BMC engine is particularly useful in producing seeds that can pass through complex mathematical guards. Then, FuSeBMC runs its engines with extended time limits using the smart seeds created in the previous round. FuSeBMC manages this process in two main ways using its Tracer subsystem. Firstly, it uses shared memory to record the labels covered by each test case. Secondly, it evaluates test cases, and those of high impact are turned into seeds for subsequent test fuzzing. We evaluated FuSeBMC by participating in Test-Comp 2022 to test the tool's ability in two categories of the competition: code coverage and bug detection. The Test-Comp 2022 results show that we significantly increased our code coverage score from last year, outperforming all tools in all categories. Over 2021 and 2022, FuSeBMC received six awards (four gold medals, one silver medal and one bronze medal). A hypothetical sketch of the goal-label instrumentation idea follows this entry.
Type Of Technology Software 
Year Produced 2021 
Open Source License? Yes  
Impact FuSeBMC is part of a broader effort from the software testing community to find security vulnerabilities in programs. Since its first publication in 2021, FuSeBMC has attracted the attention of several researchers both from academia and industry. As FuSeBMC is currently the only test case generator that efficiently and effectively combines various Fuzzing and BMC techniques, we aim to verify industrial-grade learning-based systems implemented in C in the following years of the EnnCore award. 
URL https://ssvlab.github.io/lucasccordeiro/papers/fase2022.pdf
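The fragment below is a hypothetical sketch of the goal-label idea behind FuSeBMC: each branch of the program under test receives a reachability goal, and generated test cases are scored by the goals they cover. The label names and placement are illustrative assumptions; the tool's actual instrumentation differs in its details.

    #include <stdio.h>

    int classify(int x)
    {
        if (x > 100) {
            GOAL_1:;          /* goal: then-branch reached */
            return 2;
        } else {
            GOAL_2:;          /* goal: else-branch reached */
            return x < 0 ? -1 : 0;
        }
    }

    int main(void)
    {
        int x;
        if (scanf("%d", &x) != 1)
            return 1;
        printf("%d\n", classify(x));
        return 0;
    }

A test suite that drives execution through both GOAL_1 and GOAL_2 achieves full branch coverage of classify; test cases that reach rarely covered goals are the ones promoted to seeds for further fuzzing.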
 
Title QNNVerifier: A Tool for Verifying Neural Networks using SMT-Based Model Checking 
Description QNNVerifier is the first open-source tool for verifying implementations of neural networks that takes into account the finite word-length (i.e. quantization) of their operands. The novel support for quantization is achieved by employing state-of-the-art software model checking (SMC) techniques. It translates the implementation of neural networks to a decidable fragment of first-order logic based on satisfiability modulo theories (SMT). The effects of fixed- and floating-point operations are represented through direct implementations given a hardware-determined precision. Furthermore, QNNVerifier allows users to specify bespoke safety properties and verify the resulting model with different verification strategies (incremental and k-induction) and SMT solvers. Finally, QNNVerifier is the first tool that combines invariant inference via interval analysis and discretization of non-linear activation functions to speed up the verification of neural networks by orders of magnitude. A minimal sketch of a fixed-point verification harness of this kind follows this entry.
Type Of Technology Software 
Year Produced 2021 
Open Source License? Yes  
Impact QNNVerifier is part of a broader effort from the verification community to provide guarantees on the safety of neural networks. Since its publication (about three months ago), QNNVerifier has attracted the attention of several researchers both from academia and industry. As QNNVerifier is currently the only verification tool capable of analysing the software implementation of neural networks, we are aiming to verify industrial-grade learning-based systems in the following years of the EnnCore award. 
URL https://arxiv.org/abs/2111.13110
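The harness below is a minimal, hypothetical sketch of the kind of fixed-point property such a tool can check; it is not QNNVerifier's actual input format. A one-neuron Q4.12 model is constrained to a bounded input range via the usual verification-harness stubs (__VERIFIER_nondet_int, __VERIFIER_assume), and a bespoke safety property is stated as an assertion.

    #include <assert.h>
    #include <stdint.h>

    extern int __VERIFIER_nondet_int(void);
    extern void __VERIFIER_assume(int cond);

    #define FRAC_BITS 12                          /* Q4.12: hardware-determined precision */
    #define FIX(x)    ((int32_t)((x) * (1 << FRAC_BITS)))

    static int32_t relu_fix(int32_t x) { return x > 0 ? x : 0; }

    int main(void)
    {
        int32_t w = FIX(0.5), b = FIX(1.0);
        int32_t x = __VERIFIER_nondet_int();

        /* Input domain: 0.0 <= x <= 4.0 in fixed point. */
        __VERIFIER_assume(x >= FIX(0.0) && x <= FIX(4.0));

        int32_t y = relu_fix((int32_t)(((int64_t)w * x) >> FRAC_BITS) + b);

        /* Bespoke safety property: the output never exceeds 3.5. */
        assert(y <= FIX(3.5));
        return 0;
    }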
 
Title Towards Global Neural Network Abstractions with Locally-Exact Reconstruction: Data, Networks and Code 
Description Data, neural networks and code of the experiments in the paper "Towards Global Neural Network Abstractions with Locally-Exact Reconstruction". A stand-alone Python library to compute GINNACER abstractions is included, as well as state-of-the-art global and local abstraction alternatives.
Type Of Technology Software 
Year Produced 2023 
Open Source License? Yes  
URL https://zenodo.org/record/7235766
 
Description AFRiTS 2023: Workshop on Automated Formal Reasoning for Trustworthy AI Systems 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact We have organised the AFRiTS workshop as part of the 26th Brazilian Symposium on Formal Methods (SBMF 2023). The workshop aimed at bringing together researchers working on challenges at the intersection between artificial intelligence (AI) and cyber-security, including security for AI and AI for security. Participants left with a better understanding of essential conceptual/behavioral properties of neural-based systems, the challenges in verifying them, and how to safeguard such systems against unpredictable behavior and attacks.
Year(s) Of Engagement Activity 2023
URL https://easychair.org/cfp/afrits-2023
 
Description Citizen Jury for the Greater Manchester Local Energy Market 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach Regional
Primary Audience Public/other audiences
Results and Impact We acted as an expert witness in a citizen jury organised by Carbon Co-op. The aim of the jury was to involve citizens in decision making around the governance of the Greater Manchester Local Energy Market. We explained to the jurors what a local energy market is and how it works. The Greater Manchester Combined Authority have published a blog which summarises the findings.
Year(s) Of Engagement Activity 2022
URL https://gmgreencity.com/gm-local-energy-market-what-the-people-think/
 
Description Expert member of the Global Observatory on Peer-to-peer energy trading (GO-P2P) 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact We are involved in the Global Observatory on Peer-to-Peer Energy Trading (GO-P2P), a Technology Collaboration Programme of the International Energy Agency. It is a forum for international collaboration to understand the policy, regulatory, social and technological conditions necessary for the wider deployment of peer-to-peer projects. We are part of the teams involved in sub-task 1, which investigates the impact of peer-to-peer markets on the physical grid; sub-task 2, on hardware, software and data; sub-task 3, on transactions and markets; and the special task force on blockchain technologies. Outcomes of our efforts have served as policy recommendations on P2P markets.
Year(s) Of Engagement Activity 2021,2022,2023
URL https://userstcp.org/task/peer-to-peer-energy-trading/
 
Description Invited expert at workshops organised by the BSI Energy Smart Appliances (ESA) programme 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Professional Practitioners
Results and Impact We have been invited as an expert to three workshops organised by the BSI Energy Smart Appliances (ESA) programme to set the roadmap for addressing cyber-security challenges in the smart energy grid.
Year(s) Of Engagement Activity 2021
 
Description Invited speaker at two-day event on Energy Systems Digitalisation and Cyber Security 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact We were invited as experts to international workshops on Energy Systems Digitalisation and Cyber Security to identify the main challenges faced by the smart-grid energy community.
Year(s) Of Engagement Activity 2021
 
Description Invited talk "Privacy-friendly P2P energy trading markets" at RISE, Sweden 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Industry/Business
Results and Impact 15 participants attended the talk and were introduced to the privacy risks in the energy domain.
Year(s) Of Engagement Activity 2023
 
Description Invited talk at SCIDOCA 2023 workshop, Japan 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Invited talk regarding the work "Learning Disentangled Representations for Natural Language Definitions" at the Seventh International Workshop on Scientific Document Analysis (SCIDOCA 2023), held in Kumamoto, Japan. The prospective impacts of this work on the analysis of scientific and legal documents were discussed.
Year(s) Of Engagement Activity 2023
URL https://www.jaist.ac.jp/event/SCIDOCA/2023/
 
Description Invited talk at the IDIAP Research Institute, Switzerland 
Form Of Engagement Activity A talk or presentation
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact Invited talk at the IDIAP Research Institute in Martigny, Switzerland. IDIAP is an institute that focuses on AI-centric interdisciplinary research. Our talk presented our latest finding on testing large language models, and encouraged the attendees to consider a new and more accurate framework for evaluation based on metamorphic testing. Several attendees reported that the talk was incredibly thought-provoking, and expressed keen interest in introducing better evaluation techniques in their own research.
Year(s) Of Engagement Activity 2023
 
Description Panel at IEEE ISGT-Europe 2023 "Smart Meters: the gateway to user-centric energy services" 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Third sector organisations
Results and Impact 20+ representatives of industry/business, third sector and postgraduate students attended this panel session, where the security and privacy issues in the energy sector were raised.
Year(s) Of Engagement Activity 2023
URL https://attend.ieee.org/isgt-europe-2023/special-sessions/
 
Description Panel session at UoM CDTS 2023 forum on "AI transparency, ethics and security" 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Third sector organisations
Results and Impact 120 participants attended the annual CDTS 2023 forum where a special panel session on "AI transparency, ethics and security" was organised.
Year(s) Of Engagement Activity 2023
URL https://express.adobe.com/page/NfBxVTfgjugY0/
 
Description Panel: How is Artificial Intelligence influencing Formal Methods? Challenges and opportunities 
Form Of Engagement Activity A formal working group, expert panel or dialogue
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Other audiences
Results and Impact Discussion panel at a scientific conference, with experts from both the academic and industrial worlds. The topic was how the current AI technology trend has impacted the formal methods field, and how formal methods can be applied to support existing AI techniques and improve their trustworthiness. The panel sparked a lively discussion on the feasibility and limitations of AI for formal methods.
Year(s) Of Engagement Activity 2023
URL https://sbmf23.ufam.edu.br/
 
Description School - Secure P2P energy trading: From theory to practice 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Postgraduate students
Results and Impact We have participated in the organisation of an international school on secure and privacy-friendly peer-to-peer energy trading. Around 60 participants (postgraduate students and industry) attended the school and actively engaged in discussions. Various collaborations have been seeded at this event.
Year(s) Of Engagement Activity 2022
URL https://www.esat.kuleuven.be/cosic/events/snippet-school-2022/
 
Description We have organized a special session at SafeAI 2022 as part of the AAAI's Workshop on Artificial Intelligence Safety (https://safeai.webs.upv.es/index.php/enncore-special-session/). 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach International
Primary Audience Professional Practitioners
Results and Impact Context: EnnCore addresses the fundamental security problem of guaranteeing safety, transparency, and robustness in neural-based architectures. Specifically, EnnCore aims at enabling system designers to specify essential conceptual/behavioural properties of neural-based systems, verify them, and thus safeguard the system against unpredictable behaviour and attacks. In this respect, EnnCore pioneers the dialogue between contemporary explainable neural models and full-stack neural software verification.

Objective: This special session discussed existing studies' limitations, our research objectives, current achievements, and future trends to achieve explainable and fully-verifiable learning-based systems. In particular, we discussed the development and evaluation of new methods, algorithms, and tools to achieve fully-verifiable intelligent systems, which are explainable, whose correct behaviour is guaranteed, and robust against attacks. We also described how EnnCore is being validated on two diverse and high-impact application scenarios: securing an AI system for (i) cancer diagnosis and (ii) energy demand response.
Year(s) Of Engagement Activity 2022
URL https://safeai.webs.upv.es/index.php/enncore-special-session/
 
Description Workshop on "Security for all in an AI enabled society" 
Form Of Engagement Activity Participation in an activity, workshop or similar
Part Of Official Scheme? No
Geographic Reach National
Primary Audience Study participants or study members
Results and Impact 20 participants attended the workshop, which focused on the topic "Security for all in an AI enabled society".
Year(s) Of Engagement Activity 2023
URL https://enncore.github.io/events/secaiws/