Why generative adversarial networks (GANs) are awesome… and awesomely terrifying

There are few things we write about here more than artificial intelligence, and for good reason — it’s core to the future economy and a foundational offering from our firm. We exist in that space between what’s known and what’s possible, between staples like mobile apps and databases and innovations like voice-based UI or AI integrations powering a complete corporate digital transformation. But one thing we don’t always delve too deeply into is the different types of artificial intelligence and what each is good for. One such branch of AI has gotten a lot of press recently: generative adversarial networks, or GANs. They’re awesome, powerful and full of promise… and just as awesomely terrifying for what they could mean going forward.

Broad strokes of AI

We did recently write a post about whether something is actually harnessing AI, but it focused on clarifying what exactly AI is rather than digging into the most common types of AI out there today. So here are some of the broader strokes of AI for reference:

  • Most AI advancements and applications are based on a type of algorithm known as machine learning, in which you feed the algorithm a mountain of data, task it with finding patterns, and then instruct it to re-apply those patterns to new and/or emerging data (there’s a quick code sketch of this below)
  • Deep learning is a subset of machine learning that leverages neural networks to find and amplify even the smallest patterns within datasets
  • Neural networks are composed of layers of computational nodes (typically run on clusters of GPUs and/or CPUs) that work together to analyze data. The arrangement is loosely modeled on the human brain and its neurons.

Now, there are ever more subsets within each of these categories, and these categories don’t cover all of AI, but they’re good broad strokes for framing what we’re talking about.
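To make that first bullet a little more concrete, here’s a quick sketch in Python using scikit-learn (our own toy example, not tied to any particular project): we feed a small neural-network classifier a pile of labeled handwriting samples, let it find the patterns, then ask it to apply those patterns to samples it has never seen. The dataset, model size and settings here are arbitrary choices for illustration.

```python
# A minimal sketch of the machine-learning loop described above:
# feed the algorithm labeled data, let it find patterns, then
# re-apply those patterns to data it hasn't seen before.
# (Illustrative only -- dataset and model choices are assumptions.)
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# "A mountain of data": ~1,800 small images of handwritten digits
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A small neural network (deep learning in miniature) learns the patterns
model = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
model.fit(X_train, y_train)

# Re-apply the learned patterns to new data the model has never seen
print("Accuracy on unseen digits:", model.score(X_test, y_test))
```

Swap the digits out for chess moves, photos or eye-tracking results and the loop is essentially the same.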

What are GANs good for?

So, machine learning and deep learning (which harnesses neural networks) are great for recognizing and/or learning patterns in data (which doesn’t have to be numbers; it can be anything: chess moves, photos, eye-tracking results, etc.). But what if you want to create new data instead of just identifying patterns in it?

That’s where GANs come into play.

According to the MIT Technology Review, “GANs are having a bit of a cultural moment. They are responsible for the first piece of AI-generated artwork sold at Christie’s, as well as the category of fake digital images known as ‘deepfakes’.”

In a GAN setup, unlike a traditional AI setup, you have two neural networks working in concert (but somewhat at cross purposes). Again, according to the MIT Technology Review:

“You start by feeding both neural networks a whole lot of training data and give each one a separate task. The first one, known as the generator, must produce artificial outputs, like handwriting, videos, or voices, by looking at the training examples and trying to mimic it. The second, known as the discriminator, then determines whether the outputs are real by comparing each one to the same training examples.”
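To make that generator-versus-discriminator loop concrete, here’s a heavily simplified sketch in Python using PyTorch (our own illustration, not code from the MIT piece). Instead of handwriting or video, the “real” data is just numbers drawn from a bell curve, and the generator learns to fake them; the network sizes and training settings are arbitrary assumptions chosen for brevity.

```python
# A toy GAN: the generator learns to fake samples from a normal
# distribution, while the discriminator learns to tell real from fake.
# (Illustrative sketch only -- sizes and hyperparameters are arbitrary.)
import torch
import torch.nn as nn

torch.manual_seed(0)

# "Training data": real samples drawn from a bell curve centered at 4
def real_batch(n=64):
    return torch.randn(n, 1) * 1.25 + 4.0

# Generator: turns random noise into a (hopefully realistic) sample
generator = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 1))
# Discriminator: outputs the probability that a sample is real
discriminator = nn.Sequential(nn.Linear(1, 16), nn.ReLU(),
                              nn.Linear(16, 1), nn.Sigmoid())

g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(3000):
    # 1) Train the discriminator: real samples -> 1, fakes -> 0
    real = real_batch()
    fake = generator(torch.randn(64, 8)).detach()
    d_loss = (loss_fn(discriminator(real), torch.ones(64, 1)) +
              loss_fn(discriminator(fake), torch.zeros(64, 1)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2) Train the generator: try to make the discriminator say "real"
    fake = generator(torch.randn(64, 8))
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()

# After training, the generator's fakes should cluster near 4
print(generator(torch.randn(5, 8)).detach().squeeze())
```

Scale that same loop up to millions of images and far deeper networks, and you get the photorealistic faces and deepfake videos that make headlines.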

In this way, the two networks work against each other, and together, to produce something incredibly lifelike. That can be terrifying in its ability to make a president appear to say whatever you want on video, and as fake news spirals out of control, it’s not hard to imagine consumers being unable to tell the difference between a deepfake and the real thing.

The good news, at least for now, is that GANs have some very real limitations that will make them hard to weaponize anytime soon:

“They need quite a lot of computational power and narrowly-scoped data to produce something truly believable. In order to produce a realistic image of a frog, for example, it needs hundreds of images of frogs from a particular species, preferably facing a similar direction. Without those specifications, you get some really wacky results…”

Ultimately, GANs could prove incredibly useful in fields like image generation, media post-production, medicine and more. But they could just as easily be used, one day, to terrifying ends of unknown scope. As with so many things in AI, it’s a mixed bag that depends heavily on the intentions of its wielder.

 




Jeff Francis

Jeff Francis is a veteran entrepreneur and founder of Dallas-based digital product studio ENO8. Jeff founded ENO8 to empower companies of all sizes to design, develop and deliver innovative, impactful digital products. With more than 18 years working with early-stage startups, Jeff has a passion for creating and growing new businesses from the ground up, and has honed a unique ability to assist companies with aligning their technology product initiatives with real business outcomes.
