Our current read on how AI has been generating the fried egg images is that it feels playful, intentional, and shocking all at once. At first glance, the process of producing food images seems fairly well thought out (we've tried other foods). It appears to select appropriate lighting, table setting, mood, and background. Elements such as a cutting board, plate, or countertop seem just as important to the AI as the egg itself. It's our opinion that there is some level of reasoning going on.
Upon closer inspection, the AI seems to accurately reproduce the yolk, the egg white, and the crispy fried edges. It knows the difference between a raw and a cooked consistency. Sometimes it's comical to see it over-fry eggs and misrepresent what fried edges look like. Even the proportions of the yolk versus the egg white occasionally get thrown off.
Fried texture both does and doesn't seem to be a priority. Looking closely at the fried areas, the texture is inconsistent: some edges look hyper-realistic, while others look like fried breading or batter. After repeated prompts it mostly produces what we want and what we're looking for, with no complaints. But sure enough, within a larger set of about 10 to 20 runs, it will produce something off-beat. How much nuance is going on seems inconsistent.
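We don't know which model or tooling is behind these runs, but this kind of run-to-run inconsistency is easy to study with any local text-to-image setup. A minimal sketch, assuming a Stable Diffusion checkpoint driven through Hugging Face's diffusers library (the checkpoint name and prompt are our own placeholders), that fixes the seeds so off-beat results can be reproduced and compared:

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical setup: any locally stored Stable Diffusion checkpoint
# works; runwayml/stable-diffusion-v1-5 is just a common public one.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

prompt = "a fried egg in a pan, crispy browned edges, natural light"

# Same prompt across 20 fixed seeds: any off-beat texture that shows
# up can be reproduced later by rerunning its seed.
for seed in range(20):
    generator = torch.Generator("cuda").manual_seed(seed)
    image = pipe(prompt, generator=generator).images[0]
    image.save(f"fried_egg_seed_{seed:02d}.png")
```

Fixing the generator seed makes each run repeatable, so an oddly fried edge can be regenerated and inspected instead of being lost to randomness.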
When we entered "broken egg," a new series we wanted to play with, the AI seemed to fumble with the results. Whether we were looking for a cracked egg, a broken egg, or an open egg, it underdelivered.
What is the AI referencing during the prompts? How much machine learning is going on in the background? What will happen after long sessions of repeated prompts? We will keep posting the egg series results on our Instagram.
Everything we run is offline, using local Nvidia hardware and machines. Any audio transcription and text-to-speech is done locally as well. Every day is a new day with the tools we use.
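The post doesn't name the local audio tools, so as one hedged example of how a fully offline pipeline like this can look: OpenAI's open-source whisper package handles transcription on local hardware, and pyttsx3 synthesizes speech through the operating system's own engine (the file name below is hypothetical):

```python
import whisper   # pip install openai-whisper
import pyttsx3   # pip install pyttsx3

# Transcription runs entirely on local hardware; the model weights are
# downloaded once and cached, so no audio ever leaves the machine.
model = whisper.load_model("base")
result = model.transcribe("session_notes.wav")  # hypothetical file
print(result["text"])

# Offline text-to-speech through the operating system's speech engine.
engine = pyttsx3.init()
engine.say(result["text"])
engine.runAndWait()
```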
Our subject matter revolves around food - we feel this is low enough on the priority scale not to cause alarm with any future agencies or governance. Again, we take a lighthearted view of food and AI.
We do not alter code or train the AI to generate code based on our prompts or results.
All input is text only. All output is text, photos, or text-to-speech audio.
All inputs are run on early releases of the tools.
Our social media and website content is not directly connected to other websites or tools.
None of our results or content is intended to provide nutritional, philosophical, or life advice. We do not input or output any images, videos, or results involving people.
In general, we feel this simple approach is representative of how the general public will first experience and interact with the first iterations of publicly accessible AI systems.
AI alignment is an important area of research within artificial intelligence. It refers to the idea that the goals and behaviors of AI systems should be aligned with human values and goals, and that AI should be designed to operate safely and ethically.

Many researchers and experts in the field are working on methods and techniques to keep AI systems aligned with human values, an increasingly important consideration as AI becomes more powerful and ubiquitous in society.

Alignment is a complex and challenging problem that requires multidisciplinary collaboration across computer science, philosophy, ethics, psychology, and other fields. The goal is to create AI systems that can safely and effectively work alongside humans.

One of the main concerns is unintended consequences: AI systems that are not aligned with human values and goals could cause harm or disruption in areas such as healthcare, finance, and transportation.

To address these concerns, researchers are exploring approaches such as value alignment, reward engineering, and inverse reinforcement learning. These approaches aim to ensure that AI systems behave in ways consistent with human preferences and can adapt to changing circumstances and goals.
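As a concrete, heavily simplified illustration of one of these ideas: preference-based value alignment often comes down to fitting a reward model to pairwise human judgments. The sketch below is our own toy example, not drawn from any particular system; the 4-dimensional "outcome features" and the assumption that the first item of each pair was always preferred are purely illustrative.

```python
import torch

# Toy reward model learned from pairwise preferences (Bradley-Terry):
# P(a preferred over b) = sigmoid(r(a) - r(b)).
# The 4-dim "outcome features" and labels are purely illustrative.
torch.manual_seed(0)
features_a = torch.randn(64, 4)   # outcomes a human preferred
features_b = torch.randn(64, 4)   # outcomes the human rejected
reward = torch.nn.Linear(4, 1)    # linear reward model r(x)
opt = torch.optim.Adam(reward.parameters(), lr=0.05)

for _ in range(200):
    logits = reward(features_a) - reward(features_b)
    # every pair here is labeled "a preferred", hence targets of 1
    loss = torch.nn.functional.binary_cross_entropy_with_logits(
        logits, torch.ones_like(logits))
    opt.zero_grad()
    loss.backward()
    opt.step()

print(f"final preference loss: {loss.item():.3f}")
```

This same "which of two outcomes did the human prefer" logistic loss is the core of the reward modeling used in systems trained from human feedback.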