Excerpts from the panel discussion on Trustworthiness in AI for CPS by eminent professors and industry experts at CyPhySS-2023, organized by the AI4ICPS TIH at IIT Kharagpur.

CyPhySS, India’s largest annual summit on Cyber-Physical Systems (CPS), was hosted and organised by AI4ICPS at IIT Kharagpur from 20 to 22 July 2023, with Artificial Intelligence as its central theme. AI4ICPS is a National AI Hub for Interdisciplinary CPS (ICPS) formed under the NM-ICPS national mission of DST, Government of India.

This 3-day CyPhySS Summit ended with a grand panel session on “Trustworthiness of AI based CPS and the Road Ahead”.

Panel Chair: Prof. Partha Pratim Chakrabarti


Trustworthiness means being reliable, responsible, reputable, faithful, fair, unbiased and transparent. In computing, all of this has become technical: something that provides openness for all while protecting the privacy of data and people.

Is CPS affecting the environment? Is it transparent? As these terms evolve and people formulate them in various ways, we put all of these related terms under the bucket of trustworthiness in computer science.

This has not started overnight; we have a long history of reliable control from the past. Cyber-physical systems have long existed as control systems, AI has taken them into another domain, and the question is how that affects trustworthiness.

What are the societal issues related to trust for such CPS systems?

Prof. Siddhartha Mukhopadhyay, IITKGP: I am not an expert. Definitions of AI run narrow; trust runs a long way. I have seen defence mission flights and Chandrayaan, and I am amazed at the lengths they go to in order to make sure the project goes as planned. We spend billions of dollars building systems while praying to God we never have to use them. What if they are kept in a canister and, 20 years later, suddenly put to war: can they work? If they work, then we can say we can trust them! In aerospace too we use aerodynamic models that are only approximate; an electrochemist laughs at the simple models an electrical engineer uses. We expect that reality won't deviate beyond the buffer, and we have all seen that that is not how it works. We test subsystems and assume their integration works as a complete system. Our trust depends essentially on tests at all stages of design, development and deployment.


AI or not, non-AI systems like aircraft also have millions and billions of parameters that they continuously schedule. I don't see AI as something that brings new high-dimensional problems requiring a different kind of trust. This trustworthiness business is a matter of culture: failure has always been accepted provided the probability of failure is within acceptable levels. For an autonomous car you have to code the policy for the famous trolley problem, which is not the case for a driver, who makes a contextual decision; we simply say the driver chose a kid over an old man when he had only two ways.

Prof. Bharadwaj Amrutur, IISc: The most complex intelligent systems we deal with are humans, and there we see no formal guarantees. We don't fully understand how we work. There are certain hardware structures coded in our brains that we must understand.


There are many large degrees of freedom that we can't mathematically model. Take the example of a city with moving people: this is a complex problem. If we have to engineer such systems, we must turn to ML and AI. This is the reality we have to live with. We probably can't reuse the millions of years of human evolution, but we can come up with fundamental components that are trustworthy, built through lots of components being run for many, many thousands of hours before they are deployed. We also have to let humans verify. Make it open source and build many such systems in parallel; trust can be worked out through statistical analysis. We set the mean time to failure insanely large. We need rigorous systems. We want the community to think together on this topic of trust.
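As a back-of-envelope illustration of why that statistical route is so expensive, consider a sketch under a simple assumed exponential failure model: surviving T failure-free test hours only supports an upper confidence bound of about -ln(1-c)/T on the hourly failure rate.

```python
import math

def failure_rate_upper_bound(hours_tested: float, confidence: float = 0.95) -> float:
    """Upper confidence bound on the hourly failure rate after
    `hours_tested` failure-free hours, assuming exponentially
    distributed failures: solve 1 - exp(-rate * T) = confidence."""
    return -math.log(1.0 - confidence) / hours_tested

# Supporting a 1e-9-per-hour failure rate (a typical aviation target for
# catastrophic failures) at 95% confidence takes ~3 billion test hours:
print(failure_rate_upper_bound(3e9))  # ~1e-9
```

This is why testing alone cannot carry ultra-high reliability claims and has to be combined with design-time rigour.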

The guarantees become more and more difficult; we will be probably approximately correct, and sometimes even that gets harder and harder.

Prof. Sanjoy Baruah, Washington University in St. Louis: I head a center named Trustworthy AI. Deep-learning-based components are going to be increasingly used in safety-critical systems, in three different ways: perception, computation and control.


Perception: decoding sensor and input data from cameras, lidar and so on. Neural networks outperform here, yet we don't have a mathematical definition of a pedestrian, and no AI can recognize all possible types of moving vehicles. Still, this isn't fundamentally different from how we come to trust that a thermometer measures temperature accurately.

Computation is the easiest of all. We train the neural network to do faster something we already know how to compute: we know what the correct output is, and the network only speeds it up, so we can use computational complexity theory to let the NN explain its computation.
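When the correct output is well defined and cheap to check, the network can be treated as an untrusted accelerator. Below is a minimal sketch of this propose-then-verify pattern; the surrogate is a stand-in for a trained network, and the tolerance is an assumed parameter.

```python
import numpy as np

def surrogate_solve(A, b):
    """Stand-in for a learned model that proposes a solution quickly.
    (Here: a perturbed exact solve; in practice, a trained network.)"""
    return np.linalg.solve(A, b) + 1e-3 * np.random.randn(len(b))

def verified_solve(A, b, tol=1e-6):
    """Propose-then-verify: accept the fast answer only if it checks out."""
    x = surrogate_solve(A, b)
    if np.linalg.norm(A @ x - b) <= tol * (1 + np.linalg.norm(b)):
        return x                      # cheap O(n^2) residual check passed
    return np.linalg.solve(A, b)      # fall back to the exact O(n^3) solver

A = np.random.randn(5, 5) + 5 * np.eye(5)
b = np.random.randn(5)
x = verified_solve(A, b)
assert np.allclose(A @ x, b, atol=1e-5)  # correctness holds either way
```

Checking a candidate answer is asymptotically cheaper than recomputing it from scratch; that asymmetry, which complexity theory makes precise, is what lets the network certify its computation.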

Control: algorithms for controlling actuation. Control algorithms in academic settings come with too many assumptions. We have to use AI for its performance gains, but we have to build enough of a safety cushion around it.
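One common way to build such a cushion is runtime assurance in the simplex style: a learned controller proposes actions, and a simple, verified baseline takes over whenever the proposal could leave a proven-safe envelope. A toy sketch, with made-up one-dimensional car-following dynamics and an assumed safety margin:

```python
def learned_controller(state):
    """Hypothetical stand-in for an AI policy (e.g., a neural network)."""
    gap, speed = state
    return 1.0  # aggressive: always accelerate (m/s^2)

def baseline_controller(state):
    """Simple controller with an easy safety argument: brake hard."""
    return -2.0

def stays_safe(state, accel, dt=0.1, horizon=20, margin=0.5):
    """Conservative look-ahead: apply `accel` for one step, then assume
    baseline braking; does the gap to the obstacle stay above the margin?"""
    gap, speed = state
    for _ in range(horizon):
        speed = max(0.0, speed + accel * dt)
        gap -= speed * dt
        if gap <= margin:
            return False
        accel = -2.0  # baseline braking from the second step onwards
    return True

def simplex_step(state):
    """Use the AI action only when it provably keeps us in the envelope."""
    a = learned_controller(state)
    return a if stays_safe(state, a) else baseline_controller(state)

print(simplex_step((10.0, 2.0)))  # large gap: AI action accepted -> 1.0
print(simplex_step((1.0, 2.0)))   # small gap: fall back to braking -> -2.0
```

The performance case rests on the AI; the safety case rests only on the simple baseline and the switching rule, which is what keeps it auditable.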

Prateep Mishra, Chief Architect of the IoT Connected Universe Platform at TCS:

You need to build good data infrastructure for AI. There is a large pipeline of data from sensors to edge to cloud to server, and the data passes through several stages before it reaches a model. The data flows through the plant and the IT systems, and you need to build trust across this pipeline with its several architectural layers. You need to invest not only in data infrastructure but also in metadata infrastructure. Metadata is the key: what happens to the data at what stage. We need someone to audit that chain, and you have to do it anyway for regulation. Things have to be accessible, interoperable and reusable.
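A toy illustration of what auditable, stage-by-stage metadata capture can look like; the stage names and record fields are hypothetical, and a production system would use a proper lineage or catalogue service:

```python
import hashlib
import json
import time

def stamp(data: bytes, stage: str, lineage: list) -> list:
    """Append an auditable record: which stage touched the data, when,
    and a digest of the bytes that stage produced."""
    lineage.append({
        "stage": stage,
        "time": time.time(),
        "sha256": hashlib.sha256(data).hexdigest(),
    })
    return lineage

lineage = []
raw = b"sensor reading: 42.1"
lineage = stamp(raw, "edge-gateway", lineage)
cleaned = raw.replace(b"42.1", b"42.10")
lineage = stamp(cleaned, "cleaning", lineage)
# The audit trail a regulator (or a downstream model owner) could inspect:
print(json.dumps(lineage, indent=2))
```

Tying a content digest to every stage is what turns "trust the pipeline" into something a third party can actually check.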


These are part of critical infrastructure, and a lot of work is happening in the area of secure architecture. Securing one's perimeter is no longer enough. You have to use open source for digital or AI projects, so we must manage the vulnerabilities. As practitioners, we keep applying patch updates, which is extremely effort-intensive. And then there is building a standard reference ontology!

Dr. Prashanta Sarkar, GM, Tata Motors: We have traditionally been doing control algorithms over many decades, with massive adoption. If I write a single equals sign where a double equals sign was intended, the code won't work as intended; similarly if we reverse an assignment, putting the left-hand side on the right and vice versa. This is why the MISRA coding standard in C came into play.


Academics think theoretically, and industry experts talk about the simple problems.

Prof. Chakrabarti: We saw performance versus guarantees. Can you touch upon transparency versus security?

Prof. Debdeep Mukhopadhyay, IITKGP: Safety and security are both related to the concept of trust, and that makes it extremely challenging: factoring in an adversary is difficult. Traditionally we have looked at AI in isolation; once AI sits in the control and decision loop, a lot of scientific problems emerge. ML is subject to adversarial attacks through subtle changes in the input environment. We need to build CPS with reliability.
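How subtle those changes can be shows up even on a toy linear classifier. The following numpy sketch of a fast-gradient-sign-style perturbation uses made-up weights and inputs:

```python
import numpy as np

# Toy linear "model": score = w @ x + b; predict class 1 if score > 0.
w = np.array([0.8, -0.5, 0.3])
b = 0.1
x = np.array([0.2, 0.4, 0.1])  # a benign input
print(w @ x + b)               # 0.09 > 0 -> class 1

# Fast-gradient-sign perturbation: the gradient of the score w.r.t. x is
# just w, so move each coordinate a tiny step against it.
eps = 0.15
x_adv = x - eps * np.sign(w)
print(w @ x_adv + b)           # 0.09 - eps * ||w||_1 = -0.15 -> class 0
```

No coordinate moved by more than 0.15, yet the decision flipped; deep networks are vulnerable to the same mechanism at perturbation sizes well below human perception.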


Suppliers send us components that may have Trojans inside. How do you know your component, and the overall system, is secure?

We know ML needs large amounts of data. We need reliable, safe and unbiased models, and at the same time we need to secure them from malicious entities and their adversarial attacks! This is where encryption helps. But how can we do encryption while respecting the constraints of CPS? In AI for CPS, timeliness is extremely crucial. We need more education in security to develop more trustworthy systems.

AI is trained on existing data. What about data that doesn’t exist yet? How do you trust the certification done on old data?

Prof. Baruah: Certification is based on the assumption that the training data is representative of real-world data. If you want deterministic correctness, you have to add a new component at the end of the existing AI model to enforce it. Dynamically changing certification is a nightmare.

Prof. Chakraborty: Models are trained to have drift detection, and there is adaptive learning for such drift. These come in various avatars, such as witness automata and actor-critic methods. We can have such systems test the CPS for its supposed operating mode.
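A minimal sketch of input-distribution drift detection, here a rolling-mean z-test against training-time statistics; the window size and threshold are assumed, and real monitors use stronger tests (CUSUM, Kolmogorov-Smirnov and the like):

```python
import numpy as np

def drift_monitor(stream, mu0, sigma0, window=50, z_thresh=4.0):
    """Yield the time steps at which the rolling mean of live inputs
    strays too far from the training-time distribution (mu0, sigma0)."""
    buf = []
    for t, x in enumerate(stream):
        buf.append(x)
        if len(buf) > window:
            buf.pop(0)
        if len(buf) == window:
            z = abs(np.mean(buf) - mu0) / (sigma0 / np.sqrt(window))
            if z > z_thresh:
                yield t

rng = np.random.default_rng(0)
# Inputs shift from N(0, 1) to N(1.5, 1) at t = 200 (simulated drift).
stream = np.concatenate([rng.normal(0, 1, 200), rng.normal(1.5, 1, 200)])
# Typically fires shortly after t = 200 (it is a statistical test, so
# occasional earlier false alarms are possible).
print(next(drift_monitor(stream, mu0=0.0, sigma0=1.0)))
```

Once drift is flagged, the adaptive-learning loop can retrigger training or hand control to a conservative fallback mode.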

Prof. Pabitra Mitra: As humans, we evolve to trust others through our experiences with them. Why can't we apply the same to AI for CPS?

Prof. Baruah: I don't see any example that says trust just evolves over time, especially in CPS. We make a trust case by using a thing for a long time and finding, after the fact, that nothing has gone wrong; then we do have some trust there. I am a little concerned that we over-anthropomorphize and humanize.

Prof. Debdeep Mukhopadhyay: We need minimum guarantees in some critical systems.

Anand from IIIT Allahabad: Why is there no such hardware-security standard or specification so far?


Prof. Debdeep: Home automation systems are commercialized well, and now we find there is no authentication or protection. Cryptography-driven security is costly, and unless it becomes economical it won't be accepted.

How can you trust a public model when some of its training data may be malicious?

Prof. Chakrabarti: One needs to do adversarial training. The adversary always comes up with newer and newer things.
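A compact sketch of adversarial training on a toy logistic-regression model (the data, step sizes and perturbation budget are made up): at each iteration, craft worst-case perturbations against the current model and train on them alongside the clean data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy data: two Gaussian blobs in 2-D, labels 0 and 1.
X = np.vstack([rng.normal(-1, 1, (100, 2)), rng.normal(1, 1, (100, 2))])
y = np.array([0] * 100 + [1] * 100)
w, b = np.zeros(2), 0.0

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

eps, lr = 0.2, 0.1
for _ in range(200):
    # Craft fast-gradient-sign perturbations against the current model...
    grad_x = (sigmoid(X @ w + b) - y)[:, None] * w  # d(loss)/d(input)
    X_adv = X + eps * np.sign(grad_x)
    # ...then take a gradient step on both clean and adversarial batches.
    for Xb in (X, X_adv):
        p = sigmoid(Xb @ w + b)
        w -= lr * Xb.T @ (p - y) / len(y)
        b -= lr * np.mean(p - y)

p = sigmoid(X @ w + b)
print("clean accuracy:", np.mean((p > 0.5) == y))  # ~0.9 on this toy data
```

As the panel notes, this only hardens the model against the perturbations it was shown; a new attack class means a new training round, which is why the defence is a process rather than a one-time fix.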

Q) A sense of belonging brings trust. How do we bring this into AI, along with faithfulness?

Prof. Chakrabarti: The panelists have reflected on two things. We want the systems to be human-like, and a lot of work is going on with RLHF, or reinforcement learning with human feedback. But machines and their architectures are different from human architecture. The question of sanctions, or of taking things for granted, is not…

While we are building trust, let us have some faith in AI.

Conclusion by Prof. Chakrabarti: Make the system anti-fragile. If we are attacked, we shall make it much stronger. We believe we will be attacked, so be ready and get stronger.
