Researchers from Seoul National University have developed a deep learning framework to improve the skills of a robotic sketch agent


The researchers' main goal was to explore what non-rule-based techniques such as deep learning can achieve; they reasoned that drawing would make a compelling demonstration if a robot, rather than a human, could be taught to do it. Recent advances in deep learning have produced stunning artistic results, but most of these techniques rely on generative models that produce all of an image's pixels at once.

Deep learning algorithms have recently produced amazing results in various fields, including the arts. In fact, a large number of computer scientists around the world have built models capable of successfully producing artistic works, such as poems, paintings and sketches.

Source: https://arxiv.org/pdf/2208.04833.pdf

The team has unveiled a new artistic deep learning framework intended to improve the capabilities of a drawing robot. Their method, described in a paper presented at ICRA 2022 and pre-published on arXiv, allows a drawing robot to learn motor control and stroke-based rendering simultaneously.

Rather than building a generative model that produces artwork by generating pixel patterns, the researchers developed a framework that treats drawing as a sequential decision-making process. This step-by-step procedure mirrors how people gradually build up a sketch from individual lines drawn with a pen or pencil.
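
As a loose illustration of this sequential view, the sketch below (not the authors' code; the `SketchEnv` class, the stroke format, and the reward are all assumptions) frames drawing as a loop in which the agent adds one stroke at a time to a canvas and is rewarded for how much that stroke reduces the difference from the target image.

```python
import numpy as np

class SketchEnv:
    """Hypothetical stroke-by-stroke drawing environment (illustrative only)."""

    def __init__(self, target: np.ndarray):
        self.target = target.astype(np.float32)   # target image, values in [0, 1]
        self.canvas = np.ones_like(self.target)   # start from a blank (white) canvas

    def step(self, stroke: np.ndarray):
        """Apply one stroke, given as a short array of (row, col) points."""
        prev_error = np.abs(self.canvas - self.target).mean()
        for r, c in stroke.astype(int):
            # Lay down "ink" in a small neighborhood around each stroke point.
            self.canvas[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2] = 0.0
        error = np.abs(self.canvas - self.target).mean()
        reward = prev_error - error               # how much this stroke reduced the error
        return self.canvas.copy(), reward
```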

The researchers designed their architecture to run on a robotic sketching agent that creates drawings in real time with a real pen or pencil. While other teams have developed deep learning algorithms for “robot artists”, those models typically required large training datasets of sketches and drawings, as well as inverse kinematics, to teach the robot to hold a pen and draw with it.

By contrast, no examples of actual drawings were used to teach the framework. Instead, the agent develops its own drawing techniques over time by learning from its mistakes.
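
A generic trial-and-error loop of this kind might look like the hypothetical sketch below (a plain REINFORCE-style update, not the paper's algorithm; the `policy.sample` and `env.reset`/`env.step` interfaces are assumptions): the agent is never shown a human drawing, only the reward it earns for its own strokes.

```python
import torch

def train(env, policy, optimizer, episodes=1000, strokes_per_sketch=20):
    """Generic policy-gradient loop: no demonstration drawings, only reward."""
    for _ in range(episodes):
        canvas = env.reset()                          # blank canvas (assumed interface)
        log_probs, rewards = [], []
        for _ in range(strokes_per_sketch):
            stroke, log_prob = policy.sample(canvas)  # assumed stochastic policy interface
            canvas, reward = env.step(stroke)
            log_probs.append(log_prob)
            rewards.append(reward)
        # REINFORCE: increase the probability of strokes that reduced the error.
        loss = -(torch.stack(log_probs) * torch.tensor(rewards)).sum()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```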

Source: https://arxiv.org/pdf/2208.04833.pdf

Furthermore, the researchers note that their framework does not rely on inverse kinematics, which tends to make a robot’s movements somewhat rigid; instead, the system develops its own movement strategies, making its drawing style as natural as possible. In other words, unlike most robotic systems, it moves its joints directly, without the need for motion primitives.
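
The sketch below illustrates this control idea (hypothetical interface, not the authors' code): the policy outputs a small increment for each joint directly, so no inverse-kinematics solver is needed to translate pen positions into joint targets.

```python
import numpy as np

NUM_JOINTS = 6  # e.g. a 6-DoF arm

def apply_action(joint_angles: np.ndarray, action: np.ndarray,
                 max_step_rad: float = 0.02) -> np.ndarray:
    """Command each joint with a small, bounded increment chosen by the policy."""
    delta = np.clip(action, -1.0, 1.0) * max_step_rad  # policy output in [-1, 1] per joint
    return joint_angles + delta

# Example: one control step from the home configuration.
angles = np.zeros(NUM_JOINTS)
angles = apply_action(angles, np.array([0.5, -0.2, 0.0, 0.1, 0.0, -0.4]))

# An IK-based pipeline would instead output a pen position and let an inverse-kinematics
# solver compute the joint targets; here the joints are driven directly.
```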

The team’s paradigm comprises two “virtual agents”: an upper-level agent and a lower-level agent. The upper-level agent learns creative drawing strategies, while the lower-level agent learns efficient motor control.
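
A toy sketch of this two-level split is shown below (class names, interfaces, and placeholder outputs are illustrative assumptions, not the paper's code): the upper-level agent decides *what* to draw next, and the lower-level agent decides *how* to move to draw it.

```python
import numpy as np

class UpperAgent:
    """Upper level: proposes the next stroke given the current canvas and target."""

    def propose_stroke(self, canvas: np.ndarray, target: np.ndarray) -> np.ndarray:
        # Placeholder policy; in the real system this would be a learned network.
        return np.random.rand(5, 2)           # five waypoints in normalized canvas coordinates

class LowerAgent:
    """Lower level: turns stroke waypoints into joint commands for the arm."""

    def track_stroke(self, waypoints: np.ndarray) -> np.ndarray:
        # Placeholder; in the real system this would be a learned motor-control policy.
        return np.zeros((len(waypoints), 6))  # one 6-joint command per waypoint
```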

The two virtual agents were first trained independently with reinforcement learning and were paired only after each had completed its own training. The researchers then put their combined performance to the test in a series of real-world experiments using a 6-DoF robotic arm with a 2D gripper. The results of these first tests were very positive, as the robotic agent was able to produce faithful sketches of the photos it was given.
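
At test time, chaining the two separately trained agents could look roughly like the following (hypothetical `robot` interface and function names, continuing the toy classes sketched above, not the authors' experimental setup):

```python
def draw(upper, lower, robot, target, n_strokes=30):
    """Chain the separately trained agents: high-level strokes, low-level motion."""
    canvas = robot.reset_canvas()                         # assumed robot/canvas interface
    for _ in range(n_strokes):
        waypoints = upper.propose_stroke(canvas, target)  # upper level: choose the next stroke
        commands = lower.track_stroke(waypoints)          # lower level: joint commands for it
        canvas = robot.execute(commands)                  # assumed: run commands, observe canvas
    return canvas
```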

This article is written as a research summary by Marktechpost staff, based on the research paper 'From Scratch to Sketch: Deep Decoupled Hierarchical Reinforcement Learning for Robotic Sketching Agent'. All credit for this research goes to the researchers on this project. Check out the paper and the reference article.



I am a trainee consultant at MarktechPost. I am majoring in mechanical engineering at IIT Kanpur. My interests lie in machining and robotics. I also have a keen interest in AI, ML, DL and related fields. I am a technology enthusiast, passionate about new technologies and their concrete applications.

