3/7

3/5

2/29

Explore the world of Photoshop Generative AI and create a before and after . . .

2/16

1/10

Create a graph showing your next four possible projects . . . Open Format, something like this . . .

11/13

What is Genius Hour?

Carlos Baena - Pixar Animator

Pixar "Genius Hour" example. Carlos Baena

Carlos Baena's Demo Reel

10/26

I am sorry, but I am out today for a district meeting, and everything seems to fall on a GREEN DAY.  Today you are to:

Once you are done, please post it in your portfolio.

10/13

9/11

Will we ever watch an AI-generated character grow up?

AI Learns to Walk (deep reinforcement learning)

In every “AI learns to walk” video I’ve seen, the AI either learns to walk in a weird, non-human way, or it is simply trained to imitate motion capture of a real person walking. I thought it was weird that nobody had tried to train one to walk properly from scratch (without any external data), so I wanted to give it a shot! That’s what I said 4 months ago. It’s been really difficult, but I’ve finally managed to do it, so please watch the whole video! The final result ended up being awesome :)

NOTE: From the last video, you guys made it clear you didn’t like that Albert had his brain reset, so from now on his brain is here to stay (hopefully)! The next video I make with Albert will start with the brain we trained in this video, so with every video Albert will become more and more capable until he eventually learns to break out of my computer and take over the world. You can only see one Albert, but there are actually 200 copies of Albert and his room training behind the camera to speed up the training.

If you want to learn more about how Albert actually works, you can read the rest of this very long comment I wrote explaining exactly how I trained him! (and please let the video play in the background while reading so YouTube will show Albert to more people) 

THE BASICS

I created everything using Unity and ML-Agents. Albert is controlled entirely by an artificial brain (a neural network) with 5 layers. The first layer consists of the inputs (the information Albert is given before taking action, like his limb positions and velocities), the last layer tells him what actions to take, and the middle 3 layers, called hidden layers, are where the calculations that convert the inputs into actions are performed. His brain was trained using the standard algorithm in reinforcement learning: proximal policy optimization (PPO).
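As a rough illustration of the architecture described above, here is a minimal 3-hidden-layer policy network in plain NumPy. The layer sizes are made up for the sketch, and ML-Agents' actual PPO networks are far more involved; the point is just the input → 3 hidden layers → action shape.

```python
import numpy as np

def init_mlp(sizes, seed=0):
    """Random weights for an MLP: sizes = [inputs, h1, h2, h3, actions]."""
    rng = np.random.default_rng(seed)
    return [(rng.standard_normal((m, n)) * 0.1, np.zeros(n))
            for m, n in zip(sizes[:-1], sizes[1:])]

def policy(params, obs):
    """Map an observation vector to an action vector. tanh keeps the
    outputs in [-1, 1], as ML-Agents does for continuous actions."""
    x = obs
    for W, b in params:
        x = np.tanh(x @ W + b)
    return x

# Hypothetical sizes: 90 observations in, 3 hidden layers, 20 actions out.
params = init_mlp([90, 64, 64, 64, 20])
action = policy(params, np.zeros(90))
```

During training, PPO would adjust every weight in `params`; here they stay random, which is roughly Albert at the very start of room 1.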

For each of Albert’s limbs I’ve given him (as inputs) the position, velocity, angular velocity, contacts (whether it’s touching the ground, a wall, or an obstacle), and the strength applied to it. I’ve also given him the distance from each foot to the ground, the direction of the closest target, the direction his body is moving, his body’s velocity, the distance from his chest to his feet, and the amount of time one foot has been in front of the other. As for his actions, Albert can control each body part’s rotation and strength (with some limitations so that, for example, his arm can’t bend backwards).
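The observation list above can be sketched as one flat vector, which is how such inputs are typically fed to the first layer of the network. Every field name and dimension here is a guess for illustration (e.g. 3 contact flags, 10 limbs); the source doesn't give the exact layout.

```python
import numpy as np

# Hypothetical per-limb features matching the description: position (3),
# velocity (3), angular velocity (3), ground/wall/obstacle contacts (3),
# applied strength (1) = 13 numbers per limb.
def limb_features(limb):
    return np.concatenate([limb["pos"], limb["vel"], limb["ang_vel"],
                           limb["contacts"], [limb["strength"]]])

def build_observation(limbs, body):
    """Stack the per-limb features with the body-level signals listed."""
    parts = [limb_features(l) for l in limbs]
    parts.append(body["foot_heights"])        # distance from each foot to ground
    parts.append(body["target_dir"])          # direction of the closest target
    parts.append(body["move_dir"])            # direction the body is moving
    parts.append([body["speed"]])             # body's velocity (magnitude here)
    parts.append([body["chest_to_feet"]])     # chest-to-feet distance
    parts.append([body["front_foot_timer"]])  # time one foot has led the other
    return np.concatenate(parts)

limb = {"pos": np.zeros(3), "vel": np.zeros(3), "ang_vel": np.zeros(3),
        "contacts": np.zeros(3), "strength": 0.0}
body = {"foot_heights": np.zeros(2), "target_dir": np.zeros(3),
        "move_dir": np.zeros(3), "speed": 0.0,
        "chest_to_feet": 0.0, "front_foot_timer": 0.0}
obs = build_observation([limb] * 10, body)   # assuming 10 tracked limbs
```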

Just like in the last videos, Albert was trained using reinforcement learning. For each of Albert's attempts, we calculate a score for how 'good' it was and make small, calculated adjustments to his brain to encourage the behaviors that led to a higher score and discourage those that led to a lower score. You can think of increasing Albert’s score as rewarding him and decreasing his score as punishing him, or you can think of it like natural selection, where the best-performing Alberts are the most likely to reproduce. For this video there are 13 different types of rewards (ways to calculate Albert's score); we start off with only a couple and add more with each new room, always in an attempt to get him to walk.
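The "keep what scores higher" idea can be shown with a toy hill-climbing loop. This is a deliberate oversimplification of what actually happens (the video uses PPO via ML-Agents, which uses gradients, not random tweaks), but it captures the reward logic in its smallest form:

```python
import numpy as np

def hill_climb(score_fn, params, steps=200, noise=0.1, seed=0):
    """Keep a random tweak to the parameters only if it improves the
    score -- the 'reward good attempts' idea at its simplest."""
    rng = np.random.default_rng(seed)
    best = score_fn(params)
    for _ in range(steps):
        candidate = params + noise * rng.standard_normal(params.shape)
        s = score_fn(candidate)
        if s > best:            # higher score -> keep the adjusted brain
            params, best = candidate, s
    return params, best

# Toy score: the closer the parameters get to a target vector, the better.
target = np.array([1.0, -2.0, 0.5])
score = lambda p: -np.sum((p - target) ** 2)
final, best = hill_climb(score, np.zeros(3))
```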

REWARD FUNCTION 

Room 1: We start off very simple in the first room: we reward him based on how much he moves toward the target and punish him for moving in the wrong direction. This led to Albert doing the worm toward the target, since he figured out that was the easiest way to move the quickest and get the highest score. It would have been possible to get Albert to walk in a janky way just by rewarding him for moving toward the target and punishing him for falling, as a team at Google (DeepMind) showed in 2017, but I thought it would make for a more enjoyable video if he starts off with the worm and learns to use his legs over time, rather than immediately being able to partially walk.
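Room 1's reward can be sketched as progress toward the target each frame: positive when Albert closes the distance, negative when he moves away. The scale factor and exact form are assumptions; the source only says he is rewarded for moving toward the target and punished for the wrong direction.

```python
import numpy as np

def room1_reward(prev_pos, pos, target, scale=1.0):
    """Reward = how much closer to the target Albert got this frame.
    Moving toward the target is positive, moving away is negative."""
    before = np.linalg.norm(target - prev_pos)
    after = np.linalg.norm(target - pos)
    return scale * (before - after)

target = np.array([10.0, 0.0])
r_good = room1_reward(np.array([0.0, 0.0]), np.array([1.0, 0.0]), target)
r_bad = room1_reward(np.array([0.0, 0.0]), np.array([-1.0, 0.0]), target)
```

Note this reward says nothing about *how* Albert moves, which is exactly why doing the worm scores just as well as walking.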

Room 2: In the second room we start checking whether his limbs hit the ground. If the limb that hits the ground is a foot, we reward him (but only if it's in front of his other foot; more on that later); if it isn’t, we punish him. I also made it so Albert isn’t rewarded at all unless his chest is high enough, to force him to be at least partially standing. As seen in the video, this encourages him not to fall over and to use his feet to stay up. We also introduced a new reward designed to encourage smoother movement: if he approaches the maximum strength allowed on a limb he's punished, and if he uses a strength of almost 0 he's rewarded. This encourages him to opt for the more human-like movement of using a bit of strength from many limbs as opposed to a lot of strength from one limb.
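The Room 2 terms above can be combined into one shaping function. All thresholds, magnitudes, and limb names here are invented for the sketch; the source only describes the qualitative rules (front-foot contact good, other contacts bad, no positive reward unless the chest is high enough, near-max strength punished, near-zero strength rewarded).

```python
def room2_reward(grounded_limbs, front_foot, chest_height,
                 strengths, min_chest=0.8, max_strength=1.0):
    """Hypothetical Room 2 shaping combining the rules described above."""
    reward = 0.0
    for limb in grounded_limbs:
        if limb == front_foot:
            reward += 1.0          # front foot on the ground: rewarded
        elif limb in ("left_foot", "right_foot"):
            pass                   # back foot: neither rewarded nor punished
        else:
            reward -= 1.0          # any other limb touching ground: punished
    if chest_height < min_chest:
        reward = min(reward, 0.0)  # no positive reward unless partly standing
    for s in strengths:
        if s > 0.9 * max_strength:
            reward -= 0.1          # punish near-maximum limb strength
        elif s < 0.1 * max_strength:
            reward += 0.1          # reward near-zero limb strength
    return reward

r_stand = room2_reward(["right_foot"], "right_foot",
                       chest_height=1.0, strengths=[0.05])
r_crawl = room2_reward(["left_hand"], "right_foot",
                       chest_height=0.3, strengths=[])
```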

Room 3: This is where we start to polish the gait Albert developed in room 2 and teach him to turn. From here on we use the chest-height calculation as another direct reward: the higher his chest is, the more he’s rewarded, in an attempt to get him to stand up as straight as possible. These rewards give Albert a decent gait; however, he’s still not using both of his feet (which was by far the hardest part of this project), so room 4 is designed to do exactly that.

Room 4: We get Albert to take more steps with a few additional rewards. To start, we introduce a 2-second timer that resets when one foot goes in front of the other. We reward Albert whenever this timer is above 0 (the front foot has been in front for less than 2 seconds), and we punish him whenever it goes below 0 (the front foot has been in front for more than 2 seconds). We add another reward proportional to the distance of his steps to encourage him to take larger steps. To smooth out the movement, we also add a punishment every frame proportional to the difference between his body’s velocity in the previous frame and the current frame, so if he’s moving at a perfectly consistent velocity he isn’t punished at all, and if he makes very quick, erratic movements he’s punished a lot.
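The three Room 4 terms can be sketched together. The reward magnitudes and the `smooth_weight` are assumptions; the structure (step timer with a 2-second limit, step-distance bonus, velocity-change penalty) follows the description above.

```python
import numpy as np

def room4_rewards(step_timer, step_distance, vel, prev_vel,
                  timer_limit=2.0, smooth_weight=1.0):
    """Hypothetical Room 4 shaping: reward while the step timer is still
    running, punish once it expires, reward longer steps, and punish
    frame-to-frame velocity changes to smooth the motion."""
    remaining = timer_limit - step_timer
    timer_reward = 1.0 if remaining > 0 else -1.0   # front foot < 2 s in front?
    step_reward = step_distance                     # larger steps score more
    smooth_penalty = -smooth_weight * np.linalg.norm(vel - prev_vel)
    return timer_reward + step_reward + smooth_penalty

# Steady gait: timer running, decent step, constant velocity.
steady = room4_rewards(0.5, 0.4, np.array([1.0, 0.0]), np.array([1.0, 0.0]))
# Stalled gait: timer expired and an abrupt velocity change.
stale = room4_rewards(3.0, 0.4, np.array([1.0, 0.0]), np.array([0.0, 0.0]))
```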

Room 5: For the final room, the only change I made to the reward function was to go back to an earlier version of a reward. Throughout the other rooms I had been tinkering with how to reward Albert’s feet being grounded. My initial thought was to only reward the front foot for being grounded, to get him to put more weight on his front foot when taking steps, but somewhere along the way I changed it to rewarding Albert for any foot being grounded, and that was the version Albert trained with in rooms 3 and 4. For this final room I switched back to the old front-foot-grounded reward, which resulted in a much nicer-looking walk. Also, the video makes it seem like I never reset Albert’s brain. That isn't entirely true: I had to occasionally reset it because of something called decaying plasticity.

OTHER

Decaying plasticity was a big issue. Basically, Albert’s brain specializes a lot from training in one room, and then training in the next room on top of that brain is difficult because he first needs to unlearn the specialization from the first room. The best way to solve the issue is to reset a random neuron every once in a while, so that over time he “forgets” the specialization of the network without it ever being noticeable; the problem is I don’t know how to do that through ML-Agents. My solution was to keep training on top of the same brain, but if Albert’s movement didn’t converge as needed, I recorded another attempt trained from scratch and stitched the videos together where their movements were similar. If you know how to reset a single neuron in ML-Agents, please let me know! The outcome from both methods is exactly the same, but it would be a smoother experience to have the neurons reset over time instead of all at once.
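At the weight level, "resetting a random neuron" would look something like the NumPy sketch below: pick one unit and re-initialize its incoming weights and bias. ML-Agents exposes no such hook (which is exactly the problem described above), so this is purely an illustration of the idea, not an ML-Agents recipe.

```python
import numpy as np

def reset_random_neuron(weights, biases, rng):
    """Pick one neuron in a random layer and re-initialize its incoming
    weights and bias -- the gradual-forgetting idea described above."""
    layer = rng.integers(len(weights))
    neuron = rng.integers(weights[layer].shape[1])
    weights[layer][:, neuron] = rng.standard_normal(weights[layer].shape[0]) * 0.1
    biases[layer][neuron] = 0.0
    return layer, neuron

rng = np.random.default_rng(0)
weights = [rng.standard_normal((4, 8)), rng.standard_normal((8, 2))]
biases = [np.ones(8), np.ones(2)]
layer, neuron = reset_random_neuron(weights, biases, rng)
```

Calling this occasionally during training would erode old specialization a little at a time, instead of the all-at-once reset described above.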

For rooms 1 to 4 I only allowed Albert to make a decision every 5 game ticks, but for the final room I removed that constraint and let him make decisions every frame. I found that if Albert makes a decision every game tick, it’s too difficult for him to commit to any proper movements; he ends up just making very small movements, like slightly pushing his front foot forward when he should be taking a full step. The 5-game-tick decision time forces him to commit to his decision for at least 5 game ticks, so he ends up being more careful when moving a limb. When I recorded him beating the final room I removed this limitation, because he had already learned to commit to his actions, so letting him make a decision every tick just results in smoother motion.
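That decision interval is the standard "action repeat" pattern: query the policy only every N ticks and hold the last action in between. A minimal sketch (the toy policy and inputs are made up):

```python
def run_with_action_repeat(policy, observations, repeat=5):
    """Query the policy only every `repeat` ticks and hold the last
    action in between -- the 5-game-tick decision interval described
    above. With repeat=1 the policy decides every tick, as in the
    final room."""
    actions, current = [], None
    for tick, obs in enumerate(observations):
        if tick % repeat == 0:
            current = policy(obs)   # new decision this tick
        actions.append(current)     # otherwise commit to the last one
    return actions

# Toy policy: the "action" is just the sign of a scalar observation.
acts = run_with_action_repeat(lambda o: 1 if o >= 0 else -1,
                              [1, -1, -1, -1, -1, -1, 1], repeat=5)
# The first decision is held for 5 ticks even though the
# observation changes on tick 1.
```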

If you’re still reading this, thank you for being so interested in the project! I’d like to upload much more often than once every few months, and to do that I need some help. I have 2 part-time positions open: one for a Unity AI Developer and one for a Unity Scene Designer. It would start off as part time (paid per project), but I’d love to bring someone on full time provided they’re skilled enough :) If you think you’d be able to help, please apply here for the AI Developer position: forms.gle/rExRJCKcxNmxnBRu5 and here for the Scene Designer position: forms.gle/VafZTMZ8QMruSBiRA I’ve hidden these job postings in this long pinned comment to make sure anybody who applies is interested enough in the videos to actually read the whole comment, so thank you for reading all the way through! :D Also, if you have any ideas for how to improve the AI (or solve the issue of decaying plasticity with ML-Agents), include the text "Technical idea" in your comment so I can find it more easily!

Thank you so much for watching! This video took me 4 months to make, so please, if you enjoyed it or learned something from it, share it with someone you think will also enjoy it! :)

9/11

Make sure you publish your Project Direction Board.

9/7

Create an Inspiration Board of 4 possible project directions.

9/5

8/31

Practiced carving a curved spherical object with a base.  

Watched Most

Skipped:

Chisel Examples

Watched Most

Skipped:

8/25

Seniors Gone

Met with Junior IB students regarding their first screen.

8/21

This is where we are going . . .

Practiced carving a curved spherical object with a base.  

Watched Most

Skipped:

Chisel Examples

Watched Most

Skipped:

8/17

Why Study Art?

Who are the artists and designers that you LOVE?  Show me your favorite art!

8/17

5/5

Max Hunt - Furniture Designer | MAKERS WHO INSPIRE

5/5

Final Project Workday

5/1

Final Project Workday

4/27

Final Project Workday

4/27

Final Project Workday

2/16

2/14

2/8

2/6

1/30

1/23

THIS IS DUE MONDAY!!! 1/23

This is weighted (1X)

This is weighted (2X)

Discussion: What are the negatives and positives of AI art?

René Descartes: "I think, therefore I am"

1/19

IB

1/17

1/10

IB

11/29

So You Wanna Make Games?? | Episode 2: Concept Art

11/7

10/25

10/3

End of the Quarter Project Check: 10/14


9/19

9/15

Who is Banksy?

8/22

Students updated their goals for this year.  New students created their portfolios.

"And where I excel is ridiculous, sickening, work ethic. " ~Will Smith

8/19

From 3D Model to Foam Pattern | Cosplay Tutorial
Gnomon 2021 Student Reel