For a long time, artificial intelligence (AI) has smacked of something a bit too cute: a toy, a supercharged Alexa; great at dispensing already-established knowledge, but incapable of any real innovation.
I wrote two things about regulating AI. First, all AI installations must generate their own power supply; with all the money involved, they should be able to do that. Second, all AI productions must clearly indicate that they are AI productions at the beginning, during, and at the end, with severe penalties for non-compliance.
I have only been writing code and doing automated systems design for about 50 years. IMHO there is no such thing as "AI"; it is the tech salesman's equivalent of the laundry detergent salesman's "New and Improved." A computer can only do what it has been instructed to do, and that can include learning new sequences or processes. It still will only do what it was told. Over the years I designed machinery and process systems that would self-diagnose a failure. Many thought it was AI. It was not; it was just a method of programming that would see which condition was missing and tell you. Still, many kept proclaiming it was AI even when I insisted it was not. Funny how people can fool themselves into something being what they want it to be instead of what it is.
I think the OEMs and coders are looking for a pass on liability with the excuse of "it was not thinking right, we turned it off, and we are all safe now." If you believe that, I have a bridge to sell you in NYC.
I do have some issues with the talk of adding layers of safety to an "AI" system. In the industrial automation world, including robots, the only safe process was to first build a failsafe safety environment and then place the automation inside the safe zone. This has worked for some time. The failsafe system was mostly hardwired, until redundant computer systems arrived with fixed functional-safety systems and failsafe reactions to an internal failure. All very complicated, but worth the cost to protect humans.
Many tend to promote whatever will help them succeed, but a pet rock is one thing; an AI system is another. It is an automation system, not an intelligence system, and there are no safety rules as of yet to protect us from those systems. If there is any motion or physical apparatus being controlled, it is not safe unless it is contained. The political members of the government are not the ones who should be buying this stuff; they have no idea of the safety issues that can come from letting things run in automatic without a guard rail to protect those in proximity to the devices being controlled.
It very much reminds me of the dot-com bubble. Amazon is one of the few that survived.
God help us all.