The Sync: Practical, Informative, Strategic AI Perspectives

DALL-E and Claude Release New Features, GPT-4 Beats Humans in Persuasion, Hugging Face Guide to Open-Source ML

This Week’s Highlights on the Sync!

OpenAI - New Feature for DALL-E: 

Joseph shares details on OpenAI’s update to DALL-E, which lets users make specific changes to AI-generated images by describing the edits in text prompts.
Learn more about how to use this new feature

SYNTHMINDS’ VOICES IN AI: 

Eldad and Joe discuss the life and work of Sam Altman, CEO of OpenAI, from his formative years at Stanford and Y Combinator to taking the helm at the world’s best-known AI company. What fuels his drive to advance AI technology to unprecedented levels?

🎧 Listen to our latest episode to discover more

📰 News You Need To Know

🗣️

Many-Shot Jailbreaking AI research from Anthropic and quick thoughts from Gary Marcus

Anthropic recently shared a paper on a vulnerability it calls many-shot jailbreaking, in which an attacker packs a model’s long context window with enough faux dialogues to override its safety training. Gary Marcus highlights the potential threat, noting “In LLMs, the attack surface appears to be infinite. That can’t be good.”

🔨

Did you also catch Anthropic’s announcement that "Claude is capable of interacting with external client-side tools and functions, allowing you to equip Claude with your own custom tools to perform a wider variety of tasks"?

🎉

The Opera One browser is “adding experimental support for 150 local LLM variants from approximately 50 families of models to our browser. This marks the first time local LLMs can be easily accessed and managed from a major browser through a built-in feature.”

This looks worthwhile to test but be prepared to allocate 2-10 GB of local storage space for each model.

🤔

“GPT-4 is 82% more persuasive than humans…There’s little doubt that AI will soon be the greatest manipulator of opinion the world has ever seen…”

This New Atlas article will spark discussion about the broader societal implications of persuasive AI: personalized ads, manipulation by bad actors, and the shaping of opinions and decisions.

🔦

In case you haven’t read it: “This guide is designed specifically for those looking to understand the bare basics of using open-source ML. Our goal is to demystify what Hugging Face Transformers is and how it works, not to turn you into a machine learning practitioner, but to enable better understanding of and collaboration with those who are.”

AI - Word of the Week

Variational Dropout

Variational dropout is a technique used in deep neural networks to better capture model uncertainty and prevent overfitting.

Can we keep AI Models humble and teach them their own limitations? Let's discuss "variational dropout".

During training, regular dropout randomly turns off some neurons (the units that make up a neural network). This prevents the neurons from becoming too dependent on each other, which can lead to overfitting (when the model performs well only on its training data and fails to generalize to new examples).

Variational dropout takes this a step further by also turning off random neurons when making predictions on new data, not just during training. Because a different random set of neurons is shut down on each prediction (inference) pass, the network gives slightly different outputs for the same input. If those outputs vary a lot, the model is uncertain about that particular input.

The variance in the predictions lets the model estimate how uncertain it is: high variance means the model is very uncertain about that data point. The model can effectively say "I don't know" for inputs very different from what it saw during training, and the higher variance helps AI developers see where a model is struggling and even hints at what’s happening inside the black box.

Tips for Using Variational Dropout Effectively:

  1. Tune the Dropout Rate: Balance uncertainty estimation against overall model accuracy.

  2. Use Monte Carlo Sampling: This can give a better estimate of the predictive mean and uncertainty than a single forward pass.

  3. Combine with Other Techniques: Pair it with test-time data augmentation, ensemble methods, or probabilistic modeling to get a more comprehensive uncertainty estimate.
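The Monte Carlo sampling tip above can be sketched in a few lines of NumPy. This is a minimal illustration only: the tiny two-layer network and its random, untrained weights are hypothetical stand-ins for a real trained model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical tiny network: random, untrained weights for illustration only
W1 = rng.normal(size=(4, 16))
W2 = rng.normal(size=(16, 1))

def stochastic_forward(x, drop_rate=0.5):
    """One forward pass with dropout left ON at inference time."""
    h = np.maximum(x @ W1, 0.0)              # ReLU hidden layer
    mask = rng.random(h.shape) > drop_rate   # randomly switch off neurons
    h = h * mask / (1.0 - drop_rate)         # inverted-dropout scaling
    return h @ W2

def mc_predict(x, n_samples=100):
    """Monte Carlo estimate: predictive mean plus an uncertainty score."""
    preds = np.stack([stochastic_forward(x) for _ in range(n_samples)])
    return preds.mean(axis=0), preds.std(axis=0)

x = rng.normal(size=(1, 4))
mean, std = mc_predict(x)
print(f"prediction: {mean[0, 0]:.3f}, uncertainty (std): {std[0, 0]:.3f}")
```

Averaging many stochastic passes gives a steadier prediction than a single pass, and the spread (standard deviation) across passes is the model's built-in "how sure am I?" signal: the bigger the spread, the less you should trust that prediction.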

Why Care? 

Applying dropout at test time prevents individual pathways from becoming too specialized, forcing the model to maintain redundant representations that generalize better. Variational dropout helps deep neural nets avoid overconfident predictions and better express uncertainty. That helps humans spot where to double-check the AI’s output or even take over. It’s an important technique for building more trust and safety into AI outputs.

🌐 ChatGPT for Everyone!

AI Image Creation for Everyone - Starts April 15

In today's visually driven world, captivating images are crucial for effective marketing and communication. We’ve designed a comprehensive AI art course that empowers you to harness state-of-the-art tools like Midjourney, Leonardo, and DALL-E 3. The course starts April 15, and spaces are still available!

Join your instructors Wes Shields & Goda Go, and learn firsthand how to create stunning visuals for presentations, campaigns, and brand storytelling that will resonate with your audience and drive business success.

Use the discount code SAVE10 and get 10% off!

Enroll now and bring the power of AI Image Generation to life, and into your skillset!

Research and Discoveries

Evaluating AI's Potential in Control Engineering

A Benchmark Study with GPT-4, Claude 3 Opus, and Gemini 1.0 Ultra

Picture this: a group of researchers decide to throw a party, and the guests are AI models named GPT-4, Claude 3 Opus, and Gemini 1.0 Ultra. The venue? The challenging world of undergraduate-level control problems. The paper, Capabilities of Large Language Models in Control Engineering, uses a dataset named ControlBench to explore how well the three models tango with mathematical theory and engineering design.

Key Findings/Implications

So, who shone in this study? The spotlight fell on Claude 3 Opus, which outperformed GPT-4 and Gemini 1.0 Ultra in solving the problems, demonstrating superior accuracy and reasoning on the ControlBench dataset. That included better problem-solving across a variety of classical control design challenges, with stronger understanding and explanation quality than the other two models.

But it wasn’t all smooth sailing: all three models stumbled when visuals entered the chat. Diagrams? Plots? They scratched their heads. Here’s the twist, though: despite these hiccups, they still showed promise in becoming the next big thing in control engineering.

Framing the Discussion

If AI can get good at control engineering, imagine the possibilities! We're talking about AI designing systems that could fly planes, control robots, or manage your home's heating more efficiently. This study is like the opening scene of an epic movie, where AI starts to blend into areas we once thought were solely human territory. Sure, there are bumps and learning curves, but the potential points toward further ground-breaking discoveries and innovations.

Putting it into Daily Workflows

The principles of control engineering are transferable to many fields. For example, marketers and educators can bring them into their work by looking at data to understand the "system dynamics". This sets the stage to predict trends, much like a control engineer anticipates the behavior of a complex system.

For marketers, this would follow these steps: Data Collection, Modelling Market/Audience/Content Dynamics, Simulation & Prediction, and then Adaptive Strategies. So, if the model predicts a rising trend, you can shift product development or marketing focus to align with it.

Educators wanting to bring control engineering principles into their work could craft dynamic learning environments. They would start with Student Modelling, Curriculum Design, and Feedback Loops, building toward Personalized Learning Paths. Each student can then be better challenged and supported, leading to a more engaging and effective learning experience.

The big takeaway: This research isn't just for the tech-savvy. It's a glimpse into a future where AI could be our co-pilot in navigating the complexities of our worlds, both big and small. From making smarter decisions to designing more responsive systems, this paper shows AI’s increasing proficiency in complex problem-solving across multiple industries.

🤝 Together, We Innovate

At Synthminds, we're convinced that it's through partnership with our community that we can fully harness AI's capabilities. Your perspectives, journeys, and inquisitiveness propel our collective journey forward.

We very much wish for this newsletter to be what YOU want to read weekly. Thanks for reading and please join us again next week!
