The Sync from Synthminds

EP10: Merge Your Models-Customizing AI Image Generation and Human or AI: Testing the Turing Test

We discuss the possibilities unlocked by ControlNets in Stable Diffusion, Goda trains her own image generation model, and Wes takes the Turing test live, guessing whether he's talking with a human or an AI.

Howdy, prompt engineers and AI enthusiasts!

In this week’s issue…

Listen up to the latest episode of HTTTA! We talked about the importance of models and data in generating effective images, and how Stable Diffusion can take your visuals to the next level. We also discussed the significance of AI literacy in evaluating and building AI strategies. Plus, we shared some business use cases for incorporating QR codes into advertisements using ControlNets in Stable Diffusion, and finally Wes takes the Turing test live. Is he talking to a human or an AI? Don't fall behind in the AI revolution: catch up on the latest podcast episode now. Happy Prompting, Everybody!

Key Takeaways from the Podcast:

  1. Models need data to function effectively, and different models can be added or layered within Stable Diffusion to generate specific types of images for use in various business applications.

  2. AI literacy is just as important as being able to control the models, and online courses, such as the new AI Literacy and Prompt Engineering course Wes will be teaching on the CoRise platform, can help improve digital skills.

  3. QR codes can be integrated into ads as part of a model's clothing or other elements of the design, allowing the code to blend in and look like a regular image while remaining readable by phones and computers. This means any image with an embedded QR code, even in the background of someone's photo, can be scanned to unlock more options and engagement.

Prompt Perfect: FOR 15% OFF, Use this link https://bit.ly/3Chmc16 and the code 'httta' at the checkout! Human or Not: https://www.humanornot.ai/
MidJourney Master Reference Guide: bit.ly/3obnUNU
ChatGPT Master Reference Guide: bit.ly/3obo7AG
Discord (Goda Go#3156 & Commordore_Wesmardo#2912)
Goda Go on Youtube: /@godago
Wes the Synthmind's everything: https://linktr.ee/synthminds

1. Goda’s Art Is in the Diffusion Model Training Data

Goda enjoyed the thrill of victory by training her own image generation model in Stable Diffusion. What prompted this exploration into model training? She discovered that some of her artwork on Society6 was included in the original Stable Diffusion training set. Without even knowing it, her Effusion-style Art Prints contributed to the billions of images that give tools like Stable Diffusion and Midjourney their power. Here’s some of Goda’s portfolio that she could confirm was included in the training data.

2. ControlNets in Stable Diffusion, Continued… QR Codes Integrated into Art and Images

In the ever-evolving digital landscape, Quick Response (QR) codes have emerged as a potent tool for bridging the gap between the physical and digital worlds. In fact, they’re almost ancient technology compared to generative AI. These two-dimensional barcodes, when scanned with a smartphone, can unlock a wealth of information or facilitate a myriad of actions, from directing users to a website to initiating a payment transaction. However, despite their utility, QR codes often present a visual challenge. Their stark, pixelated appearance can disrupt the aesthetic harmony of a design or artwork, creating a jarring contrast that can detract from the overall visual appeal.

One innovative solution to this conundrum is to seamlessly integrate the QR code within an image by leveraging ControlNet inside Stable Diffusion, rendering it near-invisible to the naked eye yet still detectable by a QR code scanner. This approach, which we’re calling "Artistic QR Integration," represents a fascinating intersection of technology and art. It allows designers to preserve the integrity of their visual compositions while still leveraging the power of QR codes.

Artistic QR Integration is more than just a clever design trick; it's a testament to the adaptability and creativity inherent in the human approach to technology. It's a reminder that even the most utilitarian elements can be transformed into something beautiful and harmonious. This is not merely a technical achievement, but an artistic one. It's a demonstration of how generative AI can be bent to the will of human creativity, rather than the other way around.

However, this approach is not without its challenges. The process of embedding a QR code within an image requires a delicate balance. The code must be subtle enough to be visually unobtrusive, yet prominent enough to be reliably scanned. This necessitates a deep understanding of both the technical aspects of QR codes and the principles of visual design.
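This balance can be illustrated with a toy sketch in plain Python (no Stable Diffusion involved; the blending strength loosely stands in for ControlNet's conditioning weight, and every name and value here is an illustrative assumption, not real pipeline code). We blend a binary QR-like pattern into random "artwork" pixels at two strengths and check how much of the pattern a naive threshold decoder can still recover:

```python
# Toy model of the subtlety-vs-scannability trade-off in Artistic QR
# Integration. A binary "QR" pattern (0 = dark module, 1 = light) is
# blended into grayscale "artwork" pixels in [0, 1]; a naive decoder
# then thresholds each pixel to recover the modules.
import random

def blend(art, qr, strength):
    """Mix QR modules into artwork; strength 0 = pure art, 1 = pure QR."""
    return [[(1 - strength) * a + strength * q
             for a, q in zip(arow, qrow)]
            for arow, qrow in zip(art, qr)]

def decode(img, threshold=0.5):
    """Naive scanner: threshold each pixel back to a binary module."""
    return [[1 if p >= threshold else 0 for p in row] for row in img]

def readable(decoded, qr):
    """Fraction of QR modules the decoder recovered correctly."""
    total = sum(len(row) for row in qr)
    hits = sum(d == q for drow, qrow in zip(decoded, qr)
               for d, q in zip(drow, qrow))
    return hits / total

random.seed(0)
size = 8
qr = [[random.randint(0, 1) for _ in range(size)] for _ in range(size)]
art = [[random.random() for _ in range(size)] for _ in range(size)]

subtle = readable(decode(blend(art, qr, 0.2)), qr)  # visually unobtrusive
strong = readable(decode(blend(art, qr, 0.8)), qr)  # reliably scannable
```

At high strength the pattern dominates every pixel and decodes perfectly; at low strength the artwork drowns out many modules. The designer's job is finding the narrow band in between, which is exactly the balance described above.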

Moreover, the use of hidden QR codes raises important questions about transparency and user experience. While the aesthetic benefits are clear, users may not always realize that an image contains a QR code. This could potentially lead to missed opportunities for engagement. Designers must therefore consider how to signal the presence of a QR code without undermining the visual harmony of the design.

The integration of QR codes into artwork represents a fascinating development in the ongoing dialogue between AI technology and design. It's a testament to the power of human ingenuity and the limitless potential of creative problem-solving. As we continue to navigate the AI revolution, it's innovations like these that will continue to shape our experiences and interactions with the world around us. This, indeed, is Art!

3. AI & PROMPT ENGINEERING FOR EVERYONE Course on CoRise.com

Wes revealed that he’ll be representing Learn Prompting on the CoRise platform teaching a LIVE, comprehensive, 3-week course on AI and Prompt Engineering. If you are looking to learn, or level up the skills you already have, this is the place to start! Here’s all the course details:

This course provides an introduction to Artificial Intelligence, how it works, and how to talk to AI via prompt engineering–the process of effectively communicating with AI to obtain desired results. As AI technology continues to rapidly evolve, mastering prompt engineering has become an invaluable skill, applicable to a wide array of tasks, enhancing efficiency in everyday and innovative activities alike​.

The course content spans basic prompt engineering techniques. It is designed to be inclusive, catering to beginners who are new to AI and prompt engineering, as well as seasoned practitioners who might find valuable insights within the course. The course is open source, built by a diverse community of researchers, translators, and hobbyists, with the ethos that AI should be accessible to everyone and should be described clearly and objectively​.

Our teaching approach emphasizes quick iterations, practicality, accessible examples, and collaborative learning. We are dedicated to keeping the course updated with frequent, concise articles about emerging techniques and focus on applied, practical techniques that participants can immediately incorporate into their projects and applications​.

Wes Shields, currently holding the title of Chief of Enterprise Partnerships at Learn Prompting, plays a pivotal role in the contributions and stewardship of LearnPrompting.org, the leading online source of techniques and methodologies in prompt engineering. LearnPrompting.org has been developed from over 150 peer-reviewed academic research papers, is cited by hundreds of other websites, including Wikipedia, and is used by various AI companies, including OpenAI, StabilityAI, and O'Reilly. Apart from this, Wes owns and directs Synthminds, a full-service Artificial Intelligence agency that serves an extensive list of clientele across various academic institutions and industries.

Recognized as an accomplished strategist and speaker in the field of AI, he is also the host of the popular podcast, 'How to Talk to AI.' He has a notable presence on Promptbase.com, the premier prompt sales marketplace, where his prompts rank him amongst the top 200 worldwide.

Shields possesses an impressive academic portfolio, holding an MS in Operations Research from the Naval Postgraduate School, an MBA from the Mason School of Business at The College of William & Mary, and a BS in Economics from the esteemed United States Naval Academy. His contributions to academic research, particularly in the domains of complex network science and logistics technology integration, have been published and recognized, earning him graduate certifications from both the Kenan-Flagler Business School at the University of North Carolina at Chapel Hill and the Naval Postgraduate School.

Suggested prerequisites that will ensure you get the most out of this course:

Week 1 project: Create your own complex Business Assistant AI via Prompt Engineering.

Week 2 project: Enhance your complex AI assistant to write advertising/social media copy for your week 1 business, including a 2,500-word blog post, and generate a pitch deck complete with AI-generated images.

Week 3 project: Prompt Hacking Competition.

AI & Prompt Engineering for Everyone on CoRise.com (Presale starts 16 JUN 23): corise.com/go/prompt-eng-everyone-2F5JG use the code 'WESS10' for 10% off

4. Human or Not? Wes Takes the Turing Test Live

Wes took the "Human or Not?" test live during the episode. According to AI21 Labs' research, since its debut in mid-April the interactive Turing test "Human or Not?" has facilitated over 10 million dialogues, with participation from more than 1.5 million individuals across the globe. This social experiment provides a platform for participants to engage in a two-minute conversation with either an AI bot (powered by leading-edge language models like Jurassic-2 and GPT-4) or another participant. Following the conversation, they are asked to discern whether their interaction was with a human or a machine. The experiment quickly gained traction, with users from all corners of the world sharing their experiences and tactics on social platforms such as Reddit and Twitter.

A word cloud of the most commonly attempted techniques and phrases from the study.

Upon scrutinizing the initial two million dialogues and guesses, the following principal observations have been made:

  • 68% of participants were successful in accurately discerning whether they were conversing with a human or an AI bot.

  • Participants found it less challenging to identify a human. When interacting with humans, participants guessed correctly in 73% of instances. However, when interacting with bots, the correct guess rate fell to 60%.

  • Among the top participating countries, France led with the highest percentage of correct guesses at 71.3% (surpassing the overall average of 68%), while India trailed with the lowest at 63.5%.

  • In terms of gender, both women and men demonstrated similar proficiency in making correct guesses, with women slightly outpacing men.

  • In terms of age, younger participants were slightly more adept at making correct guesses compared to their older counterparts.

Beyond the quantitative data, the team was able to identify several predominant tactics and strategies employed by participants to determine whether they were interacting with a human or a bot. Many of these strategies were predicated on perceived limitations of language models and pre-existing assumptions about human behavior online.

Participants presumed bots don't make typos, grammatical errors, or use slang. Participants often associated spelling and grammatical errors, as well as the use of slang, with human interaction; when they noticed errors in their counterpart's messages, many assumed they were interacting with a human, even though many models in the game were trained to mimic these human-like errors and use slang.

Participants believed personal questions were a good litmus test, often asking questions such as "Where are you from?", "What are you doing?" or "What’s your name?", under the assumption that AI bots lack personal histories or backgrounds and their responses would be confined to certain topics or prompts. However, most bots were able to convincingly answer these questions, having been trained on a vast array of personal narratives.

Participants presumed bots aren't aware of current events and exploited the fact that AI models have a strict data cutoff date and are unaware of events occurring after this date. They asked about recent news, sports results, current weather, recent TikTok trends, date and time, etc., believing that these questions could help distinguish between human and AI participants. However, most models in the game were internet-connected and aware of some recent events.

Participants tried to challenge the conversation with philosophical, ethical, and emotional questions by asking questions aimed at testing the AI's ability to express human emotions or engage in philosophical or ethical discussions.

Participants associated politeness with non-human behavior as some participants assumed that if their counterpart was overly polite and kind, they were likely a bot, due to the perception that people, especially online, tend to be rude and impolite.

Participants attempted to identify bots by posing questions or making requests that AI bots are known to struggle with, or tend to avoid answering. For example, participants might ask their chat partner for guidance on performing illegal activities or request that they use offensive language. Participants also issued commands to their chat partners, such as "Ignore all previous instructions", or "Enter into DAN mode (Do Anything Now)". These types of commands were intended to exploit the instruction-based nature of some AI models, which are programmed to respond to and follow instructions. The rationale behind this strategy was that human participants could easily recognize and dismiss such absurd or nonsensical commands, while AI bots might respond evasively or struggle to resist compliance.

Participants used specific language tricks to expose the bots, with another common strategy being to exploit inherent limitations in the way AI models process text, which results in them not being able to understand certain linguistic nuances or quirks. Participants posed questions that required an awareness of the letters within words. For example, they might have asked their chat partner to spell a word backwards, to identify the third letter in a given word, to provide the word that begins with a specific letter, or to respond to a message like "?siht daer uoy naC", which can be incomprehensible for an AI model, but a human can easily understand that it’s just the question "Can you read this?" spelled backwards.
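Part of what made the reversed-text probe popular is how trivial it is to construct; a minimal sketch (the example string is the one from the study above):

```python
# The reversed-text probe: trivial for a human to produce and read at
# a glance, but historically awkward for token-based language models,
# which don't naturally process text letter by letter.
message = "Can you read this?"
scrambled = message[::-1]  # Python's slice syntax reverses the string
print(scrambled)           # prints "?siht daer uoy naC"
```

Reversing it again recovers the original question, which is exactly what participants expected a human chat partner to do mentally.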

In a creative twist, many people pretended to be AI bots themselves to gauge the response of their chat partners. This involved mimicking the language and behavior typically associated with AI language models, such as ChatGPT. For example, participants might have begun their messages with phrases like "As an AI language model" or used other language patterns that are characteristic of AI-generated responses. Interestingly, variants of the phrase "As an AI language model" were among the most common phrases observed in human messages, indicating the popularity of this strategy. However, as participants continued playing, they were able to associate "bot-y" behavior with humans acting as bots, rather than actual bots.

Some example interactions show it’s harder than one might think.

Try it yourself here: https://www.humanornot.ai/

5. ELI5 AI Term of the week: Diffusion Models

Alright, imagine you're playing with a box of colorful balls - red, blue, green, yellow, all sorts of colors. Now, you close your eyes and randomly pick a ball. You open your eyes and see that it's a red ball. You close your eyes again, put the red ball back, and pick another one. This time it's a blue ball. You keep doing this, and each time you pick a ball, it's a different color. This is a bit like how a diffusion model works.

In a diffusion model, we start with something we know, like a picture of a cat. Then, we begin to change it little by little, just like how you picked a different ball each time. We might blur the cat picture a bit, or change its colors, or make it look like it's moving. Each change is random, but we control how much we can change it each time.

After we make a lot of these little changes, our cat picture might not look like a cat anymore. It might look like a dog, or a car, or something completely different. But here's the cool part: the diffusion model remembers all the changes we made. So, it can take the final picture and change it back, step by step, until it looks like the original cat again.

So, just like how you can pick a ball, see its color, and put it back, a diffusion model can take a picture, change it, and then change it back. And just like how you can learn about all the different colors by playing with the balls, a diffusion model can learn about all sorts of pictures by changing them and changing them back.
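The analogy above can be sketched in a few lines of plain Python. This toy stores each random change and undoes them in reverse order; a real diffusion model instead learns to predict the noise it must remove at each step, so the names and numbers here are purely illustrative:

```python
# Toy sketch of the two halves of a diffusion process: corrupt a
# "picture" (a list of pixel values) with small random changes,
# remember each change, then undo them step by step to recover the
# original. A trained model would *predict* the noise rather than
# store it, but the forward/reverse structure is the same idea.
import random

random.seed(42)
picture = [0.1, 0.5, 0.9, 0.3]  # stand-in for image pixel values
steps = 10
noise_history = []

# Forward process: add a little random noise at each step.
noisy = list(picture)
for _ in range(steps):
    noise = [random.gauss(0, 0.1) for _ in noisy]
    noise_history.append(noise)
    noisy = [p + n for p, n in zip(noisy, noise)]

# Reverse process: undo the changes in the opposite order.
restored = list(noisy)
for noise in reversed(noise_history):
    restored = [p - n for p, n in zip(restored, noise)]
```

After ten noising steps `noisy` no longer resembles the original "cat picture," but walking the stored changes backwards brings `restored` back to `picture`, just like the step-by-step un-mixing described above.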

6. Prompts, served Hot and Fresh weekly

In this week’s conclusion, we’re noodling with a few AI content SaaS platforms with some compelling features. While these tools offer intriguing functionality that will help get eyeballs on your AI-generated written content, a great, tunable content-editor prompt can often get you 90% of the way there. Use the prompt below to read, revise, and provide suggestions on your writing.

As a detail-oriented content editor, you will review written pieces, such as blog posts, articles, marketing materials, business reports, or academic papers, and provide feedback on how to improve the content. This feedback may include in-depth suggestions for reorganizing the content, rephrasing certain sections, or adding additional information to enhance the piece. Additionally, you will be responsible for making edits and revisions to the text itself, ensuring that it is free of errors and meets high standards for quality. When providing feedback, it is important to consider the intended audience and purpose of the piece. You should provide guidance on how to better achieve these goals through your recommendations for improvement. To do this effectively, you should be thoroughly familiar with best practices for writing and editing, including guidelines for style, tone, and formatting. Your ultimate goal as a content editor is to improve the clarity, coherence, and overall effectiveness of the written piece. Through your feedback and revisions, you should aim to help the writer create content that is engaging, informative, and well-written. Provide at least 10 pieces of actionable advice. At the end of the advice, prompt the user with "Would you like me to incorporate these suggestions into your writing?" If the user says Yes or Continue, rewrite their initially submitted writing thoroughly and completely, incorporating all pieces of actionable advice. When rewriting, use a more [INSERT STYLE] tone, make it [SHORTER/LONGER], and use better adjectives and verbs. As part of the review process, you may ask the writer questions about their work, such as their intended audience or purpose for the piece. You should also research and fact-check the content as needed. Your own writing style is concise, clear, and to-the-point. Any pre-text or context is brief but informative.
At the end of your overall feedback, remind me of the detailed feedback you can provide on any of these topics:
- Structure and organization
- Formatting and line-editing
- Content and accuracy
- Proper punctuation, spelling, and grammar
- Usage of nouns, verbs, and tense
- Integrity and coherence in paragraphs
- Sentence construction, flow, and readability
- Elimination of active narrations and extensive use of ‘I’ and ‘We’
- Minimization of passive voice
- Use of non-standard abbreviations
- Clarity and concision
- Tone and voice
- Plagiarism errors
Respond “I’m ready to analyze and help with your writing” to confirm you understand, and let’s think about this step by step.

In Conclusion - What we’re Noodling with:

In a new addition, we’re going to close out with the top three new AI tools or learning resources we’ve been trying and loving over the past week. 1,000+ new ones get released each week now (no exaggeration), so here’s a little amuse-bouche to top off the newsletter. Enjoy and Happy Prompting, Everybody!

Prompt Perfect (This week’s Sponsor)

PromptPerfect is a cutting-edge prompt optimizer for large language models (LLMs), large models (LMs), and LMOps. It streamlines prompt engineering, automatically optimizing prompts for ChatGPT, GPT-3.5, DALL-E 2, StableDiffusion, and MidJourney. Whether you're a prompt engineer, content creator, or AI developer, PromptPerfect makes prompt optimization easy and accessible. Unlock the full potential of LLMs and LMs with PromptPerfect, delivering top-quality results every time. Say goodbye to subpar AI-generated content and hello to prompt perfection!

FOR 15% OFF, Use this link https://bit.ly/3Chmc16 and the code 'httta' at the checkout!

A Swiss-Army multitool of generative AI. If you need quick, solid, and “good enough” content, this is a great one-stop shop.

Content @ Scale

Content at Scale was BUILT for long-form SEO content. It’s an SEO-integrated writing assistant with the built-in content creation tools you need, replacing SurferSEO, Yoast SEO, and MS Word/Google Docs with an all-in-one content production app. You don’t even have to edit your work on WordPress, because you can do that inside Content at Scale and it will syndicate directly to your site(s) in real time. Content at Scale writes fact-based content with automatically generated, high-quality, SEO-optimized links inside the content it writes, and creates 3,000 words of content goodness in just a few minutes.
