
AI Cobots and Exoskeletons: The Case of AI Self-Driving Cars



By Lance Eliot, the AI Trends Insider

I remember the first time that I saw a cobot in action. It was on a factory floor where I had previously helped put in place a robotic arm that performed automotive parts assembly. This cobot, considered a collaborative robot or a co-robot (some assert it should be referred to as a “cooperative” robot), contained some of the latest AI tech.

First, let me tell you about the “aged” robotic arm that I had put in place a few years earlier. It was enclosed in a steel-mesh cage that served to prevent humans from getting too close to the swinging mechanical arm. Given the speed and strength of the robotic arm, a human caught unawares and within the range of the arm would surely get injured. There was little to no sensory capability on the robotic arm for detecting intrusions into its operating space. This somewhat low-tech “dumb” robotic arm was able to swing back and forth unimpeded to do its job.

In the case of the cobot, it was designed and built to be near humans and work in unison with them. I had done some of the underlying research in my university lab when I was a professor, involving several projects pioneering early cobot development. Now, I had been invited to see a cobot in action on a factory floor, within earshot of the old-timer robotic arm that I had programmed some years prior.

The cobot had no protective cage surrounding it. Indeed, there was a workstation just a few feet from the cobot that housed a human worker. The human worker would do part of an assembly and then hand over the partially done part to the cobot. The cobot then did its portion of the assembly. Once the cobot had finished, it slid the part back to the human worker. The human worker made a few finishing touches and then placed the part onto a conveyor belt that would take it to the installation phase of the manufacturing process.

Some AI developers were hoping to ultimately replace the human worker by further refining the cobot, allowing the cobot to do the entire assembly of the part (ultimately aiming for a fully “lights out” operation without any human workers per se). At this point in time, the intricate aspects of the assembly required a great deal of dexterity. The cobot was mainly an arm with pincher-like grippers and lacked fully articulating robotic fingers. It was up to the human to use their own hands and fingers for the fine task of weaving wiring throughout the part.

This splitting of the task made good sense in that the robotic arm could quickly undertake its effort and do so as a “partner” with the human. If the cobot could have done all of its work and completed the part assembly sufficiently for the part to move forward to the installation phase, you probably would not have needed the human and cobot to work in unison. Instead, because the cobot was essentially in the middle of the assembly, taking a partial assembly from the human and handing back a further partially assembled part, the intermixing of the human and cobot made sense.

To take the human out of the loop, it would be necessary to either get a cobot that had greater dexterity with human-like robotic fingers or consider redesigning the part so that it could be assembled differently. The AI developers had studied the existing assembly process, in which humans had previously done all three steps (the pre-wiring, the assembly, and the post-wiring), and since they were told that a redesign of the part was not going to occur, they decided to craft a cobot to take on step 2 in the process.

I earlier mentioned that there was a workstation adjacent to the cobot that housed the human assembly operator. In actuality, there were four such workstations situated around the cobot in a star-like pattern. The cobot would do its thing for one human operator, then swing to the next human operator to take their partially assembled part, and so on, serving each human in a successive sequence.

Let’s number the humans as operators H1, H2, H3, and H4.

H1 does the pre-wiring, the cobot grabs the part, does its thing, and slides the part back over to H1 for the operator to do the post-wiring and finish the assembly. The moment the part was sliding back to H1, the cobot was already swinging over to human H2. The cobot would take the pre-wired part from H2, continue the assembly, and slide it back to H2. Swinging to human H3, the cobot would take the same actions, and then swing to human H4, after which the cobot would swing once again to human H1 and start the loop again. This was repeated over and over.

I suppose a human sitting where the cobot was anchored might get dizzy from swinging around and around all day in a tight loop. Not the cobot. It did its thing without complaint or nausea. The four humans that sat within arm’s reach were able to chat somewhat about the cobot amongst themselves (the cobot was relatively silent, with no squeaking or beeping, but the factory floor was overall quite noisy and so the humans had to speak up to be heard over the din of the factory itself).

Cobot in the Loop

The cobot was not listening to the human workers per se. It did, though, have an audio input capability akin to Alexa or Siri. A human worker could yell out a codeword to get the attention of the cobot and then state a command, such as “stop!” or other verbal instructions. Admittedly, I did wonder if the cobot might be “listening” or even recording the scuttlebutt being spoken by the four humans surrounding it. The head of the factory insisted that the cobot’s audio detection was only scanning for the activation codeword and otherwise was not recording anything. I’ll assume this was a truthful explanation (but, if I were one of the human workers, I would be skeptical!).
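
If the factory head’s description was accurate, the audio handling would be shaped like a wake-word loop: discard audio unless the activation codeword is detected, and only then interpret a short command. The sketch below is a guess at that shape, in Python, with an invented command vocabulary and invented microphone/recognizer methods; it is not the factory’s actual software.

```python
# Assumed command vocabulary; the real cobot's commands were not documented here.
COMMANDS = {"stop", "resume", "slower"}

def audio_loop(microphone, recognizer, cobot):
    """Listen continuously, but act only after hearing the activation codeword."""
    while True:
        clip = microphone.capture(seconds=1.0)         # hypothetical audio capture
        if not recognizer.contains_codeword(clip):
            continue                                    # discard audio with no codeword
        command = recognizer.transcribe(microphone.capture(seconds=2.0))
        if command in COMMANDS:
            cobot.execute(command)                      # e.g., "stop" halts the arm
```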

If a human worker was late in handing over their part, the cobot would skip that human and proceed to the next human worker in the sequence. Pretend that human H2 had fallen behind. When the cobot finished the effort underway with human H1, it would swing to H2, but if H2 did not readily hand over the part, the cobot would swing to human H3. At that juncture, human H3 might not yet be ready, since the person wasn’t expecting the cobot at that moment. The cobot would wait for human H3 to be ready.

The cobot had been programmed to keep track of the number of times that any of the human workers in its star were late in doing their respective assembly. This lateness metric was then provided to a human supervisor. If the human supervisor noticed that a human worker was falling behind excessively, the human supervisor would come over to talk with the human worker to find out what was going on.
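
To make the scheduling concrete, here is a minimal sketch, in Python, of the kind of round-robin loop the cobot appeared to be running: visit each workstation in turn, skip a worker who is not ready, and tally the lateness that gets reported to the supervisor. The class, the `floor` interface, and the method names are my own illustrative stand-ins, not the factory’s actual control software.

```python
from collections import defaultdict

class StarCobot:
    """Illustrative round-robin controller for a cobot serving a star of workstations."""

    def __init__(self, stations):
        self.stations = stations              # e.g., ["H1", "H2", "H3", "H4"]
        self.late_counts = defaultdict(int)   # lateness tally reported to the supervisor

    def run_cycle(self, floor):
        """Visit each workstation once, skipping any worker who is not ready."""
        for station in self.stations:
            if not floor.part_is_ready(station):
                self.late_counts[station] += 1     # note the lateness and move on
                continue
            part = floor.take_part(station)        # grab the pre-wired part
            assembled = floor.assemble(part)       # the cobot's own step of the assembly
            floor.return_part(station, assembled)  # slide it back for post-wiring

    def lateness_report(self):
        """Summary handed to the human supervisor."""
        return dict(self.late_counts)
```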

In that sense, the human workers of the star, humans H1, H2, H3, and H4, considered the cobot to be a bit of a tattletale. I suppose that if the cobot could really talk, it would tell humans H1, H2, H3, and H4 that there was nothing it could do to prevent itself from being a tattletale. It had been programmed to do so. What do you want it to do, lie to the boss?

Anyway, the humans in the star were working steadily and did not have much time for idle banter or messing around. The movement of the cobot also served as a reminder of the timing involved in getting your human work done. As soon as the cobot returned your part to you, you knew that you had to finish the post-wiring, put the part onto the conveyor belt, get the next part, do its pre-wiring, and be ready by the time the cobot had swung around in sequence and ended up back at your workstation.

I noticed that the human workers would keep their eye on the cobot. At first, I thought that might be due to concerns about getting whacked by the cobot.

Allow me to explain.

My robotic arm from years earlier was safely nestled inside a steel cage, unlike the cobot. The only way a human could get hurt by the aged robotic arm would be to open the door to the steel cage and go into the area reserved for the robotic arm. This would be stupid to do. In fact, it would be nearly impossible to get yourself hit by the robotic arm because I had programmed the arm to come to a halt the moment the cage door was opened. This was a prudent safety precaution.

For the cobot, the four humans arrayed in a star around it were each completely vulnerable to potentially getting hit by the cobot. Suppose that the cobot “lost its mind” and went wild, swinging itself around and waving its arm crazily. Those four humans would likely get hit. The speed of the cobot was so fast that I doubted any of the humans would have been able to duck or retreat prior to getting hit. The only way I could envision avoiding a hit would be if the cobot made some preliminary indication that it was going berserk, in which case the humans might have sufficient time to hide or run away.

To prevent the cobot from going into a human-damaging berserk mode, the AI developers had put sensors onto the cobot that were intended to detect the chance of hitting a human. If you put your human arm up and placed it into the path of the cobot, the cobot would detect the intrusion and would stop itself from moving in that direction. It would also emit a prerecorded message telling the nearby humans that there was an intrusion in the path of the cobot.
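
As a rough illustration of that safety behavior, the control loop presumably checks the proximity sensors on every tick and halts motion (and plays the warning message) whenever something enters the planned path. Everything below, including the clearance threshold and the sensor and actuator calls, is an assumed placeholder rather than the cobot’s real safety system.

```python
STOP_DISTANCE_M = 0.3   # assumed clearance threshold; the real value was not disclosed

def safety_tick(planned_path, proximity_readings, arm):
    """One iteration of an intrusion check; the objects here are illustrative stubs."""
    for reading in proximity_readings:
        intruding = reading.distance_m < STOP_DISTANCE_M and reading.lies_in(planned_path)
        if intruding:
            arm.halt()                                              # stop all motion at once
            arm.play_message("Intrusion detected in the cobot's path.")
            return False                                            # motion not permitted
    return True                                                     # path is clear, keep moving
```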

I tried this out myself to see how well-programmed the cobot was. I sat at a workstation and jammed my arm into the path where the cobot would soon be traveling. Sure enough, it detected my arm and did the proper stoppage procedure. I wondered whether I could “trick” the cobot by swinging my arm into the path at the last possible moment, tempting fate by possibly not allowing enough time for the cobot to make the detection and come to a halt.

Since I didn’t want to potentially lose my actual arm, I used a stick instead. I raised the stick with only a split second to go before the cobot arm would have reached my workstation. I was relieved to see that the cobot detected the stick and once again exercised the proper stoppage operation. The stopping time was quite quick and the detection capability seemed robust. Generally, I deduced that the safety feature was likely good enough that it would take some oddball quirk for a human to get hurt.

When I say it could be a quirk, I mean that we don’t know how exhaustively the cobot had been tested. Maybe there were some unknown or hidden loopholes in the detection or the stopping procedure. Maybe there are some bugs in it. Who knows? Knowing that you are sitting eight hours a day within the grasp of a gorilla that could tear off your limbs would, for me, be somewhat disconcerting.

For the debugging of complex systems, see my article: https://aitrends.com/selfdrivingcars/debugging-of-ai-self-driving-cars/

For ghosts in complex systems, see my article: https://aitrends.com/selfdrivingcars/ghosts-in-ai-self-driving-cars/

For the dangers of code obfuscation, see my article: https://aitrends.com/selfdrivingcars/code-obfuscation-for-ai-self-driving-cars/

For safety aspects, see my article: https://aitrends.com/selfdrivingcars/safety-and-ai-self-driving-cars-world-safety-summit-on-autonomous-tech/

The human workers, though, had seemingly become comfortable with the cobot. The only reason they were staring at it was that they knew the trouble they would get into if they weren’t ready when the cobot showed up at their workstation.

Amy, sitting at workstation H1, knew that once the cobot had reached Eric at workstation H3, it was time for her to have placed her part onto the conveyor belt and already be starting the pre-wiring of the next part intended for her. Had Amy seen that the cobot was with Judith at workstation H4, it would suggest that Amy was going to be late in having her part ready when the cobot finished at H4.

It was almost as though the human workers hoped they could mentally convey to the cobot to slow down when they needed to catch up. A human sitting in the cobot’s position could perhaps be negotiated with or pleaded with. Hey, give me a break, will you, and slow down just a fraction of a moment to let me take a breath and be ready for when you swing over to me. There was no such negotiation or discussion with the cobot.

After “visiting” with the cobot and observing it in action, along with watching the human workers and discussing the cobot with them, I next went into a conference room that was attached to the factory floor. There the R&D group showed me a prototype of a cobot exoskeleton.

In case you’ve not seen a cobot exoskeleton, imagine that you are wearing a kind of suit of armor, but one shaped like a skeleton. Some refer to these as exoframes or exosuits.

There are “dumb” exoskeletons today that allow humans to lift heavy weights. You put on the exoskeleton. You get comfortable with how it operates. You can then grab a heavy box to be lifted. The exoskeleton takes the brunt of the weight and the lifting. You can repeatedly lift heavy objects and by and large the exoskeleton is taking the strain and pressure.

In some cases, the exoskeleton is purely mechanical and unpowered. In other cases, the exoskeleton is powered, and electrical or battery power enables it to do the heavy lifting or take on similar kinds of tasks. You can use the exoskeleton for more than just heavy lifting of objects. Suppose you have to hammer a nail into the ceiling and it is going to take a long time to hammer that nail. Your arms reaching over your head will eventually get heavy and it will be hard for you to keep them raised. With an exoskeleton, you could likely keep your arms raised all day long and not feel much pain or angst in doing so.

Cobot Exoskeleton: A “Smart” Version of an Exoskeleton

A cobot exoskeleton is considered a “smart” version of an exoskeleton. The notion is to add AI to the exoskeleton and turn it into a collaborative robot that you wear. The cobot that I had seen on the factory floor was pretty much anchored into a specific spot on the floor. It was not moving and was not intended to be portable. Suppose instead that a human wore a cobot, allowing the human and the cobot to potentially move around. Ergo, a cobot exoskeleton.

For the aged robotic arm that I had set up for the factory, some of the arm’s instructions had been programmed, while other aspects of the arm’s efforts were undertaken by a Machine Learning (ML) approach. We had moved the robotic arm to show it what we wanted it to do, and after repeated such guidance it gradually “programmed” itself to perform the way we wanted. Some cobots have a similar capability of using deep learning or Machine Learning. An advanced cobot exoskeleton would likewise have such an ML feature.
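
A minimal sketch of that “show it what to do” style of learning is simply to record the joint positions while a human guides the arm and then replay (or average over several demonstrations) the recorded trajectory. Real systems do far more (smoothing, generalization, force sensing), and the arm methods below are hypothetical names used only for illustration.

```python
import time

def record_demonstration(arm, duration_s=10.0, hz=50):
    """Record joint angles while a human physically guides the arm (illustrative)."""
    trajectory = []
    for _ in range(int(duration_s * hz)):
        trajectory.append(arm.read_joint_angles())   # e.g., a tuple of joint angles
        time.sleep(1.0 / hz)
    return trajectory

def average_demonstrations(demos):
    """Naively generalize by averaging several time-aligned demonstrations point-wise."""
    return [
        tuple(sum(values) / len(values) for values in zip(*points))
        for points in zip(*demos)
    ]

def replay(arm, trajectory, hz=50):
    """Play the learned trajectory back on the arm."""
    for angles in trajectory:
        arm.move_to_joint_angles(angles)
        time.sleep(1.0 / hz)
```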

For my article about imitation as a deep learning technique, see: https://aitrends.com/selfdrivingcars/imitation-deep-learning-technique-self-driving-cars/

For Ensemble Machine Learning, see my article: https://aitrends.com/ai-insider/ensemble-machine-learning-for-ai-self-driving-cars/

For Federated Machine Learning, see my article: https://aitrends.com/selfdrivingcars/federated-machine-learning-for-ai-self-driving-cars/

For my article about biomimicry, see: https://aitrends.com/selfdrivingcars/biomimicry-robomimicry-ai-self-driving-cars-machine-learning-nature/

What does this have to do with AI self-driving cars?

At the Cybernetic AI Self-Driving Car Institute, we are developing AI software for self-driving cars. One interesting exploratory project involves the use of a cobot exoskeleton for purposes of aiding a human in the driving of a car. I realize this seems a rather farfetched approach to driving, and I agree it seems an unlikely path toward the autonomous or semi-autonomous driving of cars, but I figured you’d be intrigued by the idea and want to know about it.

Allow me to elaborate.

I’d like to first clarify and introduce the notion that there are varying levels of AI self-driving cars. The topmost level is considered Level 5. A Level 5 self-driving car is one that is being driven by the AI and there is no human driver involved. For the design of Level 5 self-driving cars, the automakers are even removing the gas pedal, brake pedal, and steering wheel, since those are contraptions used by human drivers. The Level 5 self-driving car is not being driven by a human, nor is there an expectation that a human driver will be present in the self-driving car. It’s all on the shoulders of the AI to drive the car.

For self-driving cars less than a Level 5, there must be a human driver present in the car. The human driver is currently considered the responsible party for the acts of the car. The AI and the human driver are co-sharing the driving task. In spite of this co-sharing, the human is supposed to remain fully immersed in the driving task and be ready at all times to perform it. I’ve repeatedly warned about the dangers of this co-sharing arrangement and predicted it will produce many untoward results.

For my overall framework about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/framework-ai-self-driving-driverless-cars-big-picture/

For the levels of self-driving cars, see my article: https://aitrends.com/selfdrivingcars/richter-scale-levels-self-driving-cars/

For why AI Level 5 self-driving cars are like a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For the dangers of co-sharing the driving task, see my article: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

Let’s focus herein on the true Level 5 self-driving car. Many of the comments apply to the less-than-Level 5 self-driving cars too, but the fully autonomous AI self-driving car will receive the most attention in this discussion.

Here are the usual steps involved in the AI driving task (a minimal code sketch of this pipeline follows the list):

  • Sensor data collection and interpretation
  • Sensor fusion
  • Virtual world model updating
  • AI action planning
  • Car controls command issuance
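
These steps are typically run as a continuous loop, with each cycle feeding the next. The following is a minimal sketch of one pass through that cycle; the function names stand in for what are, in practice, very large subsystems and are not any particular vendor’s API.

```python
def interpret(raw_readings):
    """Turn raw sensor readings into object detections (stub)."""
    ...

def fuse(detections):
    """Reconcile overlapping detections from multiple sensors into one estimate (stub)."""
    ...

def plan_actions(world_model):
    """Decide the next maneuver given the current world model (stub)."""
    ...

def driving_cycle(sensors, world_model, controls):
    """One pass through the usual AI driving task pipeline."""
    # 1. Sensor data collection and interpretation
    detections = interpret({name: s.read() for name, s in sensors.items()})
    # 2. Sensor fusion
    fused = fuse(detections)
    # 3. Virtual world model updating
    world_model.update(fused)
    # 4. AI action planning
    plan = plan_actions(world_model)
    # 5. Car controls command issuance
    controls.issue(plan)
```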

Another key aspect of AI self-driving cars is that they will be driving on our roadways in the midst of human driven cars too. There are some pundits of AI self-driving cars that continually refer to a utopian world in which there are only AI self-driving cars on the public roads. Currently there are 250+ million conventional cars in the United States alone, and those cars are not going to magically disappear or become true Level 5 AI self-driving cars overnight.

Indeed, the use of human driven cars will last for many years, likely many decades, and the advent of AI self-driving cars will occur while there are still human driven cars on the roads. This is a crucial point since this means that the AI of self-driving cars needs to be able to contend with not just other AI self-driving cars, but also contend with human driven cars. It is easy to envision a simplistic and rather unrealistic world in which all AI self-driving cars are politely interacting with each other and being civil about roadway interactions. That’s not what is going to be happening for the foreseeable future. AI self-driving cars and human driven cars will need to be able to cope with each other.

For my article about the grand convergence that has led us to this moment in time, see: https://aitrends.com/selfdrivingcars/grand-convergence-explains-rise-self-driving-cars/

See my article about the ethical dilemmas facing AI self-driving cars: https://aitrends.com/selfdrivingcars/ethically-ambiguous-self-driving-cars/

For potential regulations about AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/assessing-federal-regulations-self-driving-cars-house-bill-passed/

For my predictions about AI self-driving cars for the 2020s, 2030s, and 2040s, see my article: https://aitrends.com/selfdrivingcars/gen-z-and-the-fate-of-ai-self-driving-cars/

Returning to the topic of cobots and exoskeletons, let’s consider some aspects about the driving of a car and the use of automation to do so.

One approach to driving a car involves building all of the automation into the car itself, including the sensors to detect the surroundings, the computer processors to run the AI software that does the driving, and so on. I’ll call this the “AI-integrated” approach.

We might decide that rather than trying to integrate the automation into the car, perhaps we might instead build a robot that can get into and out of the car, akin to a human being, and the robot will be imbued with the ability to drive a car. I’ll call this the “AI-robotic” approach.

For those of you that have never considered the idea of having a robot drive a car, it is worthwhile to take a moment and ponder the matter. Imagine that if you could build such a robot, it could then drive presumably any of the millions of today’s conventional cars. There would be no need to change the design of cars. There would be no need to retrofit existing cars. A car would be a car.

The driving robot would simply be a robot. When the driving robot gets into the car and sits behind the wheel, you have a completely “backward compatible” approach to automating the driving of cars. Since the robot is sitting there at the wheel of the car, we’d hope and assume that it can fully drive the car. By this I mean to suggest that the robot can’t be only partially proficient in driving a car. It would need to be fully equivalent to a human driver.

I realize that you could argue that perhaps we might split some of the difference, namely juice up the car so that it has some amount of automation to be able to drive, and then have a robot that also has some of the ability to drive. The car alone cannot drive itself. The robot alone cannot drive a conventional car. Instead, you might come up with a semi-automated car that can be driven by a semi-automated robot. Sure, I suppose that’s a possibility.

You might even suggest that this walking-talking kind of robot might not be fully capable of driving a car and yet have other handy uses anyway. Maybe it can help humans into and out of the car. Maybe it can do chores around your house like cleaning the house and cooking meals. Meanwhile, it can also be somewhat of a chauffeur, but only if the car itself also has some of the proficiency that perhaps we cannot otherwise build into the robot.

For example, the robot might not be equipped with sensory devices like LIDAR, radar, ultrasonic, etc. Those sensory devices could be bulky and cause the robot design to get overly large and cumbersome. Thus, the robot needs to have those capabilities built into the car that it drives.

Once the robot gets into the car, it is able to plug into the AI system of the car and become “at one” with the car. This symbiotic aspect makes us achieve a one-plus-one equals two kind of merger. Each helps the other. When the robot is finished driving, it unplugs itself from the car and gets out of the car, moving along to do whatever other chores or tasks it can do.

Human Driver Steps into an Exoskeleton to Split Driving Task

There’s another approach that also goes beyond today’s usual thinking, namely the use of a cobot and an exoskeleton. In this use case, a human that wants to drive a car gets into a cobot exoskeleton first, and then steps into the car and sits at the steering wheel.

The human contributes certain aspects of the driving effort, while the cobot exoskeleton does other aspects. In one such scenario, the car is a conventional car and all of the driving task is borne by the human wearing the cobot exoskeleton. Another scenario involves splitting the driving task among the human wearing the cobot exoskeleton and having some form of semi-autonomous features built into the car.

We then have these three overall approaches involved:

  • AI-Integrated Driving: All of the automation built into the autonomous self-driving car
  • AI-Robotic Driving: All of the automation for driving is built into a robot; the car can be a conventional car or have semi-autonomous features
  • AI-Human Cobot Exoskeleton Driving: A human and a cobot work together to drive a car; the car can be a conventional car or have semi-autonomous features

In this latter case, the human is considered in-the-loop of the driving.

I realize that for purists, the notion of keeping the human in-the-loop would seem to undermine the overarching goal of having self-driving cars or at least fully autonomous driving (note that the AI-Robotic driving is not strictly speaking a self-driving car, instead it is a car that is autonomously driven by a robot).

As a short aside, some use the phrase robot car, or the phrases robo-car and robo-taxi, when referring to a self-driving car. I don’t like using those wordings because they confound the idea of a robot driving a car with the notion of the AI-integrated approach wherein the car drives itself. I realize you might suggest that there’s a “robot” hidden inside the self-driving car and therefore want to call it a robot or a robo-car, but I think that’s an unfortunate confounding. For me, if there really is a robot that is going to step into the driver’s seat, I’m okay with saying it is a robo-car or robot car or robo-taxi; otherwise, if the AI-integrated approach is being used, I vote for calling it a self-driving car.

For more about the naming of these kinds of cars, see my article: https://aitrends.com/selfdrivingcars/ai-reasons-call-self-driving-cars/

For why this is all a moonshot, see my article: https://aitrends.com/selfdrivingcars/self-driving-car-mother-ai-projects-moonshot/

For some of the crossing the Rubicon challenges, see my article: https://aitrends.com/selfdrivingcars/crossing-the-rubicon-and-ai-self-driving-cars/

For why we maybe should start-over on AI, see my article: https://aitrends.com/selfdrivingcars/starting-over-on-ai-and-self-driving-cars/

Having gotten that terminology conundrum off my chest, let’s get back to the aspect that there are likely self-driving car purists that might have a hefty bit of heartburn about potentially keeping a human in-the-loop of driving a car.

There are several potential reasons why keeping a human in-the-loop might make sense.

First, suppose that after trying and trying to remove the human from the loop of driving a car, AI development and advancements are unable to achieve a truly autonomously driven car. No matter what tricks or techniques are devised and employed, imagine that it just is not feasible to arrive at either a Level 5 self-driving car or a robot capable of doing the driving. What then?

I would suggest we would then want to find a means to keep the human in-the-loop. It could be that we only need the human for edge cases or corner cases of the driving task. This might be dicey in that I’ve already offered many reasons why co-sharing the driving task between humans and automation can be problematic. In any case, the odds are that as a society we are inexorably going to aim toward increasing the autonomous nature of cars, and if keeping a human involved is the only way to get there, so be it, I suppose.

See my article about the dangers of back-up human drivers: https://aitrends.com/selfdrivingcars/human-back-up-drivers-for-ai-self-driving-cars/

For my article about edge cases, see: https://aitrends.com/ai-insider/edge-problems-core-true-self-driving-cars-achieving-last-mile/

For my article about the boundaries of AI when it comes to the driving task, see: https://aitrends.com/selfdrivingcars/ai-boundaries-and-self-driving-cars-the-driving-controls-debate/

For the nature of a Turing test for AI self-driving cars, see my article: https://aitrends.com/selfdrivingcars/turing-test-ai-self-driving-cars/

Another reason to potentially keep the human in-the-loop of driving might be that humans insist that they want to remain in-the-loop.

There are self-driving car pundits that say we must eliminate all human driving on our public roadways if we are going to reach the vaunted life-saving goals of having self-driving cars. Besides my earlier point that you cannot just magically sweep under the rug the millions of existing conventional cars (and I’ve also countered and essentially debunked the claim that we might merely alter our roadway infrastructure into a two-tiered system, one tier for self-driving cars and one for human driven cars), we must also consider the societal question of whether humans will readily and willingly give up their driving privilege.

I know that the pundits would say that certainly people will gladly hand over their driver’s licenses if they knew that by doing so they would save lives. I’d say that’s quite a leap in logic and faith in how people think and are motivated. I realize another angle is that people won’t want to drive once they get accustomed to the grand convenience of being self-driven. Again, I have my doubts that everyone will see things that way.

I suppose it could be that after asking people to voluntarily stop driving themselves, and then dealing with those that won’t capitulate willingly, there could be a law that makes it illegal for humans to drive. Those holdouts for human driving would then be caught and penalized. Maybe if the culture shifts and we as a society gradually no longer view driving as a kind of “right,” it might become possible to legally regulate this last remaining “you’ll remove the steering wheel from my cold dead hands” segment of society. All of this seems a very long way off in the future.

Meanwhile, back to the matter at hand. Let’s assume, for whatever reason you like, that there are those who will want to be human drivers or that we might need human drivers to make the “last mile” toward nearly full automation.

In that case, perhaps the AI-Human cobot exoskeleton might be helpful. This blends together the human driving capability and the cobot driving capability, as augmented by the exoskeleton.

The exoskeleton might aid your use of the driving controls. The arms of the exoskeleton augment your arms when using the steering wheel. The legs of the exoskeleton augment your legs when working the brake pedal and the accelerator pedal. The cobot might be controlling the exoskeleton arms and legs, and guiding your arms and legs as appropriate during the driving task.

There you are, sitting in the driver’s seat, wearing your cobot exoskeleton. While driving on the freeway, a car in another lane starts to veer into your lane. Maybe you failed to notice the veering car, but fortunately the cobot did, which then guides your arms to turn the steering wheel to avoid the veering car, along with pushing further on the accelerator pedal to get away from the intruding car. Saved by the cobot exoskeleton driving AI system.
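
A rough sketch of that kind of intervention logic follows: the cobot monitors the car’s sensors for a vehicle intruding into the lane and, if the wearer has not already reacted, applies a corrective motion through the exoskeleton’s arm and leg actuators. Everything here, thresholds and method names included, is an assumed illustration rather than any real system.

```python
INTRUSION_THRESHOLD_M = 1.0   # assumed lateral clearance that triggers assistance

def exoskeleton_assist(perception, human_input, exo):
    """Cobot correction layered on top of human driving (illustrative only)."""
    threat = perception.nearest_lane_intruder()         # hypothetical sensor query
    if threat is None:
        return                                           # nothing to correct; human keeps control

    if threat.lateral_gap_m < INTRUSION_THRESHOLD_M and not human_input.is_evading():
        # Guide the wearer's arms to turn the wheel away from the intruding car...
        exo.arms.apply_steering_torque(away_from=threat.side)
        # ...and press the accelerator slightly to open up the gap, as described above.
        exo.legs.press_accelerator(amount=0.1)
```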

Suppose you go to the company Friday night party and have a bit too much to drink. You get into your car to drive home. You are wearing the cobot exoskeleton when you get into the car (no need to have been wearing it at the company party, unless you are trying to make some kind of fashion statement!). Even though you probably should not be behind the wheel of a car, the cobot exoskeleton ends up doing most of the driving and gets you home in one piece.

You might have some kind of physical disability that would normally inhibit your ability to drive a car, and yet the cobot exoskeleton could allow you to drive. You might need some added cognitive help when driving a car, and the cobot exoskeleton can provide it. Perhaps novice teenage drivers might be required to initially wear a cobot exoskeleton suit to aid in learning how to drive a car. And so on.

For the potential of brainjacking to get the last mile toward self-driving cars, see my article: https://aitrends.com/selfdrivingcars/brainjacking-self-driving-cars-mind-matter/

For the elderly and self-driving cars, see my article: https://aitrends.com/ethics-and-social-issues/elderly-boon-bust-self-driving-cars/

For the singularity, see my article: https://aitrends.com/selfdrivingcars/singularity-and-ai-self-driving-cars/

For the reframing of self-driving cars, see my article: https://aitrends.com/ai-insider/reframing-ai-levels-for-self-driving-cars-bifurcation-of-autonomy/

Conclusion

For many people, the prospect that we might have self-driving cars is already a kind of science fiction story that appears to be coming true. Furthermore, and separately, cobots on our factory floors make sense. Exoskeletons make sense too, especially for work involving the physical brute capability that an exoskeleton can provide. We’ve all seen various science fiction depictions of exoskeletons, especially for future military applications, such as those shown in the movies The Matrix and Edge of Tomorrow.

Does it make sense to consider having cobot exoskeletons?

And if so, would it further make sense to have ones that can help humans drive a car?

Seems like a rather radical notion.

Right now, the path appears to be the emergence of the AI-Integrated self-driving car first and foremost, and then perhaps the longshot of an AI-Robotic self-driving car if we cannot otherwise achieve the AI-Integrated approach. The melding of a human driver and a cobot exoskeleton suit to do the driving does not appear to be on any near-term horizon per se, but I don’t think we can dismiss it entirely out-of-hand.

For the time being, I’d vote that we keep our eyes open to possibilities that might seem outrageous right now, since we might need or want alternatives further down-the-road. I’m thinking about making a cobot exoskeleton that could allow me to drive like a NASCAR driver, and if so, don’t be surprised when you see me at the winner’s circle of the Indy 500. That will be me waving, along with my cobot exoskeleton driver’s suit.

Copyright 2018 Dr. Lance Eliot

This content is originally posted on AI Trends.

 


