Does a robotic painting machine have to use a vision system?

Not long ago, I was discussing the challenges of building a semi-autonomous painting robot. I brought to the table my experience in the car wash industry, which I believe is valid and relevant to creating a device of this type. In fact, many years ago, I suggested that there needed to be a commercially viable painting robot on the market and that whoever carried out this innovation would make a lot of money.

An acquaintance of mine suggested that a vision system would be needed to really create a working system. Well yes, I agree that vision sensors make sense, and I know there are many fairly simple systems that include optical flow sensors, which are less expensive than robotic sonar or lidar components.

Plus, with a video feed and an optical flow sensor, you can also do telerobotic work and then use that feed to tune your software and the various degrees of freedom on your robotic arm. Granted, paint quality control requirements lend themselves well to a vision system.
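To make that concrete, here is a minimal sketch of what the optical flow side might look like, using OpenCV's dense Farneback flow on an ordinary camera feed; the camera index and the idea of averaging the motion for the arm controller are just assumptions for illustration, not a finished design.

```python
# Minimal sketch: dense optical flow from an ordinary camera feed using OpenCV.
# The camera index (0) is an assumption for illustration only.
import cv2

cap = cv2.VideoCapture(0)              # assumed camera index
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Farneback dense optical flow: per-pixel motion between consecutive frames.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # The averaged motion vector could be fed back to the arm controller to
    # estimate how the spray head is moving relative to the surface.
    mean_dx, mean_dy = flow[..., 0].mean(), flow[..., 1].mean()
    print(f"mean flow: dx={mean_dx:.2f}, dy={mean_dy:.2f}")
    prev_gray = gray

cap.release()
```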

In fact, I asked my acquaintance: "Are you using some artificial intelligence? If so, you could wire this thing up so that tele-robotic painters would sit at a computer and paint; when they do a spray pass they like, they could mark it 'OK', and when they don't, they could press 'Clear Data', and this way the system would learn."
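As a rough illustration of that "OK" / "Clear Data" idea, here is a small sketch of how each tele-operated pass could be logged together with the operator's verdict, so the system ends up with labelled examples to learn from; all the field names and the log file are hypothetical, chosen only to show the shape of the data.

```python
# Sketch of the "OK" / "Clear Data" idea: each tele-operated spray pass is logged
# with the operator's verdict, building a labelled dataset the system can learn from.
# All names (SprayPass, passes.jsonl) are hypothetical, for illustration only.
import json
from dataclasses import dataclass, asdict

@dataclass
class SprayPass:
    nozzle_speed: float       # cm/s along the pass
    standoff_distance: float  # cm from nozzle to surface
    flow_rate: float          # ml/s of paint
    overlap: float            # fraction of overlap with the previous pass
    accepted: bool            # True if the painter pressed "OK"

def record_pass(p: SprayPass, path: str = "passes.jsonl") -> None:
    """Append one labelled pass to a simple JSON-lines log."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(p)) + "\n")

# Example: the operator liked this pass, so it is stored as a positive example.
record_pass(SprayPass(nozzle_speed=45.0, standoff_distance=25.0,
                      flow_rate=8.0, overlap=0.5, accepted=True))
```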

Another key point would be the types of spray tips and whether it is possible to use commercially available paint tips. With the right nozzles and an artificial intelligence system, the vision system would be the icing on the cake. There seem to be some good robotic-arm options already in use on the bomb-disposal robots that police units often operate; I would imagine they are available all over the world, and those standard systems are well proven for this kind of work.

So it seems that my acquaintance could be the shining star who knocks on the door of discovery here, and may be the one to bring this great invention into the world, an invention that would make the world a better and more pleasant place.

What about the robotic arm of the device? Could we use the kind of actuator often found on automated TV satellite dishes, the kind that has to be set for azimuth, elevation, and skew? That would give you three axes, correct? Perhaps we could use the same sort of software that works on that type of device to find your mark and then move into position to start. Once it starts, could the software guide it based on data coming from the vision system's sensors, using optical recognition to spray things autonomously?
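To show what I mean about borrowing the dish-positioner approach, here is a simple sketch of turning a target point into azimuth and elevation commands; the coordinates, units, and the move_to() stub are assumptions for illustration, not a real controller interface.

```python
# Sketch of the "find your mark, then move" idea for a simple azimuth/elevation
# positioner, like a motorised satellite-dish mount. Target coordinates are in
# metres relative to the base of the arm; move_to() is a hypothetical stub.
import math

def aim_angles(x: float, y: float, z: float) -> tuple[float, float]:
    """Return (azimuth, elevation) in degrees needed to point at (x, y, z)."""
    azimuth = math.degrees(math.atan2(y, x))
    horizontal = math.hypot(x, y)
    elevation = math.degrees(math.atan2(z, horizontal))
    return azimuth, elevation

def move_to(az: float, el: float) -> None:
    # Placeholder: a real controller would command the azimuth and elevation
    # motors here and wait for the encoders to confirm the position.
    print(f"commanding azimuth={az:.1f} deg, elevation={el:.1f} deg")

# Example: a mark detected 2 m ahead, 0.5 m to the left, 1.2 m up.
az, el = aim_angles(2.0, -0.5, 1.2)
move_to(az, el)
```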

It seems to me that soon, very soon, someone will figure this out and create a superb autonomous painting robot, and I personally can’t wait for that to happen. Please consider all this and think about it.
