The evolution of animation has been remarkable, transitioning from the era of classic works like 'Snow White and the Seven Dwarfs' to today's advanced techniques. These early animations were meticulously hand-drawn on paper, with tools like lightboxes enabling artists to see through the pages for frame-by-frame consistency.
The rise of technology has revolutionized animation, introducing computers, cutting-edge software, and features like hotkeys and onion skinning. With the rise of artificial intelligence, there has been a big debate about the potential impact it may have on the creative process and on the industry as a whole.
As someone who has dabbled in both animation and still drawings, I find myself torn. While I understand the concerns about AI in the arts, I also recognize its potential as a powerful tool to streamline the animation workflow. For the sake of this write-up, I wanted to take a look at some of the potentially useful additions AI could bring to animation, and how it could fix some existing problems.
I’ve always had a preference for 2D animation myself, having grown up watching the classics such as The Lion King, Tarzan, Aladdin, Mulan, and Bambi. This eventually influenced me to try my hand at drawing characters in 2D myself, at which point I fully came to realize how DIFFICULT it was!
Mulan (1998) and The Lion King (1994) by Walt Disney Pictures, and their character sheets
There are many things that one needs to consider in the 2D animation process, all of which can affect the quality and, more importantly, how long it takes to produce:
There’s a reason why you can immediately notice a stark difference in the character design of, say, Western animation vs. Eastern animation; the more lines and details you add to your character, the more complicated and time-consuming the animating process becomes.
Comparing two of my favorite shows, Attack on Titan (2013 - 2023) and Regular Show (2010 - 2017) and their wildly differing character sheets, with Eren on the left, and Mordecai on the right
A big difference between the styles comes down to the number of frames used, with the West opting to animate on 1s (24 fps) vs. the East using more 2s (12 fps) while going for much more detailed character designs. This can result in some really high highs and really low lows, as the designs can be incredibly detailed, but quite choppy if the studio in charge doesn't have enough time or resources to dedicate.
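The workload difference between 1s and 2s is simple arithmetic, but it's worth spelling out. A quick sketch (the 6-second shot length here is just a made-up example):

```python
# Rough arithmetic for "on 1s" vs "on 2s": both play back at 24 fps,
# but on 2s each drawing is held for two frames, halving the workload.
# The 6-second shot length is a made-up example.

PLAYBACK_FPS = 24
shot_seconds = 6

frames = PLAYBACK_FPS * shot_seconds   # 144 frames of screen time either way
drawings_on_1s = frames // 1           # a new drawing every frame -> 144 drawings
drawings_on_2s = frames // 2           # each drawing held for 2 frames -> 72 drawings

print(drawings_on_1s, drawings_on_2s)  # 144 72
```

Half the drawings per second is exactly what frees up time for all the extra detail per drawing.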
This is a BIG one, and one of the reasons I even wanted to do this project in the first place. With 2D animation, whether you’re hand-drawing frames or using vectors to trace a sketch, you need to draw the lines for the character.
Any artists out there will immediately understand the struggle of having a rough linework sketch look and feel better than the final line work because of the different volumes, thick vs. thin lines, dark areas, etc.
This is an example of a character I drew, the rough sketch on the left, and the lineart on the right.
The problem with having very “sketchy” or extravagant linework is that it results in the character looking jittery and unfinished. This is why you will often see characters in animation drawn with the same line thickness throughout, as it prevents jitteriness and makes the character’s movements feel smoother.
In this demonstration, the ball on the left is drawn without using line correction, and if you look closely, you will notice that it is jittery and not uniform with the other frames, while the ball on the right is consistent throughout!
This one is more specific to me, but a problem nonetheless. I mainly use my iPad for animation, and I’m aware that it instantly reduces my efficiency on projects as opposed to having a desktop-level program to work with, but the convenience is a huge boon.
With this in mind, most iPad apps don’t have vector-based linework options, making the Paint Bucket tool a much less effective option for quickly filling in color. For those who don’t know what this means, allow me to explain.
There are two types of line technologies:
In this example, the raster line is on the left, and the vector on the right. As you can see, there is a definite difference in line quality between the two, as one is much more jagged.
Rasterized is pixel-based, meaning that when you draw a line using this tech, the line will be created with pixels, and has a fixed resolution based on the canvas size. It's generally much better for shading and getting realistic designs. However, this means that zooming into a rasterized line will result in it looking grainy and jagged.
Vector-based lines are made using mathematical formulas, which results in lossless quality, due to the line constantly recalculating based on canvas size and zoom levels. It's generally worse for still images, but if you’re looking for consistency, especially in movement, it makes life SO much easier.
In this demonstration, the vector lines can be erased all at once, as each line is one single unit, whereas the raster lines on the right side are pixel-based, so they have to be erased pixel by pixel.
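A toy sketch of the difference, assuming nothing beyond what was described above: a vector line is stored as geometry and re-rasterized at whatever resolution you need, while a raster line is a fixed bitmap that can only be scaled by duplicating pixels.

```python
# Toy illustration of why vector lines stay clean under zoom while raster
# lines get blocky. The vector line is stored as two endpoints and
# re-rasterized at any resolution; the raster line is a fixed bitmap.

def rasterize_diagonal(size):
    """Re-rasterize the vector line (0,0)-(1,1) onto a size x size grid."""
    return {(i, i) for i in range(size)}      # one pixel per row: a clean 1-px diagonal

def upscale_bitmap(pixels, factor):
    """Nearest-neighbour upscale: each pixel becomes a factor x factor block."""
    return {(x * factor + dx, y * factor + dy)
            for (x, y) in pixels
            for dx in range(factor)
            for dy in range(factor)}

small = rasterize_diagonal(4)                 # the original 4-pixel raster line
vector_zoomed = rasterize_diagonal(8)         # recomputed at 2x: still 1 px wide
raster_zoomed = upscale_bitmap(small, 2)      # 2x2 chunky blocks: the jagged look

print(len(vector_zoomed), len(raster_zoomed))  # 8 16
```

The zoomed vector stays a one-pixel-wide diagonal, while the upscaled raster doubles into chunky blocks: that is the graininess described above.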
With this in mind, in apps that primarily use raster-based lines, the Paint Bucket tool, the tool most commonly used to fill color into all areas of the drawing, will have trouble filling in consistently due to the INCONSISTENCY of the raster-based lines: small gaps and rough, soft edges can let the fill leak out or leave unfilled halos around the linework.
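The Paint Bucket is, at its core, a flood fill. A minimal sketch on a made-up text canvas shows exactly how a single missing pixel in raster linework lets the fill escape:

```python
# Minimal flood fill (the paint bucket's core algorithm) on a toy canvas,
# showing how a one-pixel gap in raster linework lets the fill "leak" out.
from collections import deque

def flood_fill(canvas, start, new):
    h, w = len(canvas), len(canvas[0])
    old = canvas[start[0]][start[1]]
    if old == new:
        return canvas
    q = deque([start])
    while q:
        r, c = q.popleft()
        if 0 <= r < h and 0 <= c < w and canvas[r][c] == old:
            canvas[r][c] = new
            q.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return canvas

# '#' is linework, '.' is empty canvas. The box below has a gap in its wall.
leaky = [list(row) for row in [
    ".......",
    ".#####.",
    ".#...#.",
    ".#.....",   # <- the wall pixel at column 5 is missing: a gap
    ".#...#.",
    ".#####.",
    ".......",
]]
flood_fill(leaky, (3, 3), "x")   # bucket-fill from inside the box
print(leaky[0])  # ['x', 'x', 'x', 'x', 'x', 'x', 'x'] — the fill escaped
```

With clean vector outlines the region boundary is airtight by construction, which is why the bucket behaves so much better there.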
This is the single biggest “issue” when it comes to 2D animation. I am confident that I am not alone in this: I usually have a grand idea of a long, fancy animation when I’m brainstorming, but drawing out said project would probably take multiple lifetimes.
As such, animators instead focus on shorts, or even small cuts that just show a single simple action. To do this, they basically “map out” the cut using what are called keyframes.
There are two types of frames in this process: keyframes, which define the major poses of a motion, and in-betweens, which fill in the movement between them.
With all the above in mind, let's see how many of these issues could be addressed with artificial intelligence! Given the rate at which AI is advancing, tools like these could be hugely beneficial to the optimization process.
In all animated works, a character designer will create what are called character sheets, which serve as a base that all the other animators will reference when drawing scenes. Why can't an AI help with that?
In this example, the left drawings are supposed to simulate what the animator would draw, and the AI is adding minor details in afterward.
Providing an AI with the detailed version of the character sheet could be a solution to this, as you could animate the character simplistically, without all the extra lines, and have the AI pass through it afterward and make corrections and additions.
In this image, the idea is to show what an AI could potentially add. Though the additions don't seem too dramatic at first, these little lines add up when you're redrawing the same character hundreds of thousands of times in a large-scale production, so efficiency is super important.
With lineart, this seems like the easiest addition. Tools such as line stabilization already exist in many applications as a way to assist people who may have shaky hands or just want smoother lines.
However, outside of just using vector lines, many artists tend to be more loose/rougher with the linework on the first iteration of a character, so the lines may be sketchy, scratchy, or disconnected.
The goal with this image is to illustrate how you could take a rough sketch to completed lineart using AI. The leftmost image represents that sketch, the middle one is the artist's first pass, trying to keep it looking as close to model as possible, and the AI comes in at the third stage to clean everything up.
With AI, having a built-in way to “smooth” the lines using the character sheet from above could be an excellent way to fix this issue. If you know what the character is SUPPOSED to look like, and what line thickness you’d like them to be set to, then you can just have the AI rebuild your work with those specifications in mind.
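As a very rough sketch of what "smoothing" means in practice (no AI required for the simplest version), here is Chaikin's corner-cutting algorithm, a classic polyline-smoothing technique in the same family as the stabilizer features mentioned above. The jittery stroke is made-up sample data:

```python
# A sketch of line smoothing using Chaikin's corner-cutting algorithm.
# Each pass replaces every segment with two points at 1/4 and 3/4 along it,
# rounding off the jitter in a hand-drawn stroke.

def chaikin(points, passes=2):
    for _ in range(passes):
        smoothed = [points[0]]                  # keep the stroke's start point
        for (x0, y0), (x1, y1) in zip(points, points[1:]):
            smoothed.append((0.75 * x0 + 0.25 * x1, 0.75 * y0 + 0.25 * y1))
            smoothed.append((0.25 * x0 + 0.75 * x1, 0.25 * y0 + 0.75 * y1))
        smoothed.append(points[-1])             # keep the end point too
        points = smoothed
    return points

# A jittery, roughly horizontal stroke: y wobbles around 0.
shaky = [(0, 0), (1, 0.4), (2, -0.3), (3, 0.5), (4, 0)]
smooth = chaikin(shaky)
print(max(abs(y) for _, y in smooth))  # the wobble is noticeably reduced
```

An AI-driven version would go further by also pulling the stroke back toward the character sheet's on-model shapes, not just averaging out the jitter.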
With colors, the idea is to implement a tool that would “mark” certain zones to be colored. This would tie into the vector aspect of the program, as having a clearly defined zone labeled through vectors would allow for consistent coloring.
For example, let's say you have predefined zones in your character sheet: a jacket is labeled as C1, and a T-shirt is C2.
The idea here is to basically build out an instruction manual for the AI to go off of. If you have a character design, it generally won't change TOO much, so if you map out the colors there, the AI could theoretically follow that guideline and label things that look like eyes as E1, or hair as H1, etc.
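The "instruction manual" part is really just a lookup table. A minimal sketch, using the hypothetical zone codes from the text (C1, C2, E1, H1) and made-up RGB values:

```python
# A sketch of the "labeled zone" idea: the character sheet defines a palette
# keyed by zone codes, and the fill step looks colors up instead of the
# artist re-picking them on every frame. Codes and colors are hypothetical.

PALETTE = {                 # zone code -> RGB, defined once on the character sheet
    "C1": (40, 60, 120),    # jacket
    "C2": (230, 230, 230),  # t-shirt
    "E1": (90, 160, 80),    # eyes
    "H1": (50, 30, 20),     # hair
}

def color_zones(zones):
    """Map each labeled zone in a frame to its sheet-defined color."""
    return {zone: PALETTE[zone] for zone in zones}

# Every frame that tags its regions with the same codes gets identical colors:
frame_12 = color_zones(["C1", "C2", "H1"])
frame_13 = color_zones(["C1", "C2", "H1"])
print(frame_12 == frame_13)  # True: coloring stays consistent across frames
```

The hard part, of course, is the labeling itself: recognizing which region of a new drawing is "the jacket" is exactly where the AI would have to earn its keep.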
This is currently the most well-known use case for AI in animation: automated in-betweens. Since you already have the key poses, which are effectively the main essence of an animation, you just want something that resembles, as the name would imply, an in-between frame.
This is something programs like CACANi or frame interpolation settings already do, but having this built directly into the process would HEAVILY speed up the workflow and make animation WAY more fun.
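At its simplest, in-betweening is interpolation between matching points on the two key poses. This is only a sketch of the core idea, with a made-up 3-point "arm"; real tools (and real AI in-betweening) handle occlusion, line art, and arcs far more cleverly:

```python
# A sketch of automated in-betweening by linear interpolation: given the
# same point on a character in two key poses, an in-between frame is a
# weighted average of the two.

def inbetween(key_a, key_b, t):
    """Blend two key poses (lists of (x, y) points) at time t in [0, 1]."""
    return [((1 - t) * xa + t * xb, (1 - t) * ya + t * yb)
            for (xa, ya), (xb, yb) in zip(key_a, key_b)]

# Two hypothetical key poses of a 3-point "arm": lowered, then raised.
pose_down = [(0, 0), (1, -1), (2, -2)]
pose_up   = [(0, 0), (1,  1), (2,  2)]

# One in-between at the midpoint of the motion:
mid = inbetween(pose_down, pose_up, 0.5)
print(mid)  # [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
```

Animating on 2s vs. 1s then just becomes a question of how many values of `t` you sample between each pair of keys.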
Looking at all of this in a vacuum, these changes seem super effective and sound like a great idea. Unfortunately, a lot of this is still speculation; many of my ideas stem from where I THINK artificial intelligence will reach in the future.
Take the details section, for example, which focuses on the idea that we can have the AI pass through the lineart and add details to it like facial wrinkles, a beard, or other tedious touches: it's a bit more complicated than just "make the AI do it".
What if their face is turned? What if they have different clothes on? What if they just make a face that makes them look like a different character? Or what if the AI just doesn't recognize them at all and messes up?
These are all valid concerns that, unfortunately, I have no way of addressing, at least until we see more of what AI is capable of. However, to end on a more positive note, with things like Midjourney evolving at the rate they are, there's a good chance my suggestions aren't entirely fiction!