
GPT-4 enters the chat

GIF by Kate Walker


The Future. OpenAI has lifted the veil on its buzzy new AI system, GPT-4. It’s not the major leap that many were hoping for, but it allegedly improves everything that has made ChatGPT a sensation in the tech world. With rivals quickly popping up to take advantage of the sudden demand for all things AI, ChatGPT’s success may hinge on its ability to prove that it can be a powerful tool while also avoiding the pitfalls that have made it seem by turns hilarious and terrifying.

AI evolution
OpenAI is just getting started with ChatGPT.

  • Its new GPT-4 system accepts both text and image prompts (though it responds only in text), can generate responses of up to 25,000 words, and is allegedly more conversational.
  • It can also pass exams like the Uniform Bar Exam, LSAT, and SAT with flying colors (to the horror of educators everywhere).
  • The system is already powering the new Microsoft Bing and is set to be implemented in products like Stripe, Duolingo, and Khan Academy.
  • The model is available to the public as part of the paid subscription ChatGPT Plus, and will also be available as an API for developers to build on top of (here’s the waitlist).

All of that is not to say that GPT-4 is perfect. Far from it — it can still “hallucinate” wrong information and generate “violent and harmful text,” but to a lesser statistical degree.

Sounds almost human if you think about it.

Claude, the competitor
But there’s a little drama going on in the world of AI. Also released yesterday was “Claude” — the first chatbot from Anthropic AI, a startup founded by ex-OpenAI employees. It offers functions similar to ChatGPT’s, but Anthropic says that its training on the principles of “constitutional AI” makes it a safer option than its OpenAI rival.

Claude is already being used by Notion, Quora, and DuckDuckGo, which were among its testing partners. Additional organizations can now request to try out the chatbot, but pricing for the general public hasn’t been made available yet.

Stay relevant

Don’t miss out on the daily email about all things business, entertainment, and culture.
Subscribe

Starlink is making work very remote

Illustration by Kate Walker


The Future. Only two and a half years after Elon Musk’s Starlink debuted its satellite internet service, it has put thousands of satellites into orbit that provide fast, reliable service across the globe. That quick success seems to have made Starlink the de facto industry leader. But some users are worried that Musk’s patented move-fast-and-break-things ethos could also ground Starlink. In five years, will Starlink look more like Tesla or SpaceX? That may be up to Musk.

All the service areas
Starlink is flying high.

  • It currently has over 3,500 satellites and hopes to ultimately put up 12,000 (there are only 10,000 satellites total in space right now).
  • It’s available almost everywhere (except in countries like China and Russia, where it’s banned).
  • It can be accessed on land, sea, and soon, air — even while moving.

And when you have internet speeds of 150 megabits per second for as little as $90 per month (and a $599 hardware setup), that blows competition like HughesNet and Viasat out of the sky.

Outdoor office
So, do people actually like Starlink?

  • The overall sentiment that The Information collected was that the service was not only a great option with consistent performance, but often the only option available.
  • It had excellent connectivity in places as far as Waipū, New Zealand; the frontier of northern Montana; and the Mogollon Rim of Arizona.

That’s not to say that Starlink is without complaint — customer service is light, having a lot of satellites up in space does kind of mess up the whole out-in-the-middle-of-nowhere stargazing experience, and many don’t love that they have to rely on a Musk-run company.


AI is making deepfakes a real problem

Illustration by David Vendrell


The Future. Deepfakes are not only on the rise, but they’re becoming increasingly easier to make, thanks to apps that leverage AI to do all the hard work. While social platforms have rules for dealing with purposely misleading content, the internet moves so fast that stopping the spread of deepfakes before they go viral may take cooperation among developers, platforms, and even the government.

Troll factory
Several new AI-powered apps allow anyone to create what Britt Paris, an assistant professor of library and information science at Rutgers University, calls “cheapfakes.”

  • The apps make it possible for people who don’t have “sophisticated computational technology and fairly sophisticated computational know-how” to make deepfakes.
  • The tech does so by “cloning celebrity voices, altering mouth movements to match alternative audio, and writing persuasive dialogue” (persuasive may be a stretch).

Meme or malware?
The NYT reports that ElevenLabs, co-founded by a former Google engineer, makes one of the main tools used to create cheapfakes. After the tech was co-opted by users on 4chan to spread “hateful messages,” ElevenLabs said it would put up new “safeguards” to try to stop bad actors from hijacking the tech.

It’s hard to put the genie back in the bottle, though. 4chan users said they would use the company’s open-source code to just build their own tool — one of the unfortunate side effects of open-sourcing.

Now, ElevenLabs hopes to work with other AI developers to build a universal deepfake detection system. With another round of heated elections coming up, it may need to happen sooner rather than later.


Gen Z can let avatars go on dates for them now

Courtesy of Snack


The Future. Gen Z-focused dating app Snack is jumping into the AI craze by allowing users to create and train avatars that can chat with one another to determine if the real people behind them would be a good match. In other words, it’s a dating app’s algorithm brought to life. While Snack’s new feature may be just a big swing to break through the noise, it could foreshadow a coming age where the “meet cute” is outsourced to 1s and 0s.

Are our AIs compatible?
Snack wants to do the dating for you.

  • Snack is allowing users to create and train AI avatars by answering questions about themselves.
  • The avatar then goes out into the world of the app’s algorithm to start chatting with other users’ avatars.
  • If the avatars think you two are a good match, the app will let you know so you can scope out the other person and start chatting for real.

The ultimate goal is for avatars to start going out on dates in the metaverse all on their own before two people decide to meet IRL, which sounds like a futuristic version of getting set up by friends.

Let’s just hope Snack lets you review the conversations to ensure the date didn’t go off the rails.


Generative AI, meet social media

Illustration by Kate Walker


The Future. Major social media platforms like Meta and Snap are unveiling their own OpenAI-based chatbots and language models. The two offerings have a variety of use cases, but it’s not yet clear if either will manage to make money or avert ChatGPT’s misinformation disasters. If they fail, generative AI may struggle to become a big income stream for brands.

Visions of the future
Meta and Snap’s new generative AI products differ widely in their implementation and prospects.

  • Snap has released its new chatbot, “My AI,” powered by ChatGPT and available to its 2.5 million paying Snapchat Plus users. My AI is designed to write haikus, recipes, trip itineraries, answers to trivia questions, and other recreational or artistic content.
  • Snap will store all My AI interactions for research and development. The firm cautioned against telling secrets to My AI or relying on it for advice, since ChatGPT’s misinformation issues could still occur.
  • Meanwhile, Meta unveiled its new language model, LLaMA (Large Language Model Meta AI), a vast collection of language models that vary in size.
  • Meta will make LLaMA’s collection of language models available on a case-by-case basis to government, civil society, and academic research organizations. Its business applications aren’t clear yet.

Pandora’s box
There’s a huge risk that a company’s concern for its profit margin could motivate it to unleash its generative AI technology in irresponsible or downright unscrupulous ways. Compared to OpenAI’s models, though, LLaMA’s smaller language models are apparently easier to debug and keep from spreading misinformation.

Whether that happens probably depends on whether and how either platform monetizes these programs; so far, businesses don’t seem particularly excited about LLaMA, and Snap’s My AI seems more like a novelty draw than a branding opportunity. But someone will try soon enough.


Synthetic influencers: boom or bust?

Illustration by Kate Walker


The Future. Brands are experimenting more and more with “synthetic influencers” — digital anthropomorphic entities created for the sole purpose of marketing products. Corporations have been pouring money into the idea, but synthetic influencers haven’t gained much traction with audiences. Can money save them, or were they doomed to go belly-up from the get-go?

Business-people
Synthetic influencers are both more and less appealing than human influencers for a few reasons.

  • Lacking agency, synthetic influencers are trivially easy to control or reprogram and never experience burnout as humans do.
  • Because they’re so reliable and scalable, companies are immediately willing to sink money into them. Major corporations — including Amazon, Google, and Sony — have already poured $58 million into a single synthetic influencer firm: Superplastic.
  • But these characters aren’t introduced to the public through a compelling narrative or artistic context like characters from an animated movie or game franchise. They’re purely commercial vessels, and so far, that’s made them struggle to gain audiences.

Money can buy me love?

Generally, influencer marketing is only getting stronger. Just because synthetic influencers haven’t caught on yet doesn’t mean they won’t in the future. If it’s a matter of presentation or the public getting used to them, backing from corporations like Amazon and Google will certainly give these synthetic beings the reach they’d need to blow up.

For now, at least, we’ll have to settle for human beings.


Generative AI is “the next platform”

Unsplash


The Future. Ready or not, the generative AI revolution is here. Any business involving words, images, sound, or code stands to gain from the new tech that Silicon Valley is calling “the next platform.” In the race to capitalize, tech companies would rather beg for forgiveness than ask for permission to deploy generative AI. Because the chance of market dominance is so tempting, Big Tech might put any negative repercussions from their large language models on the back burner until they strike gold.

What’s a platform?
In the tech industry, a platform is any foundation (with disruptive potential) for building and running business applications — from the personal computer to the Internet to the iPhone. A new platform generally emerges once every 10 years.

  • What distinguishes generative AI from “next platform” candidates like the metaverse and blockchain is that users are eager to play around with these tools and stick with them.
  • While companies haven’t quite figured out how to make money from generative AI, business leaders are discovering new uses, almost guaranteeing profitability.

What’s the Tweet on Silicon Valley street?

  • “This is what an actual technology revolution looks like. It’s not 10 years of trying to find use cases. It’s use cases being found and productized faster than you can track them,” software veteran Dare Obasanjo recently shared on Twitter.
  • Large language models like ChatGPT “represent the first tech advancement that has a potential to seamlessly deploy across 7 [billion] smartphones and thus can be a platform shift,” former Microsoft executive Steven Sinofsky tweeted.

Still, Big Tech has been referring to its generative AI tools as public tests and betas because it knows they’re imperfect. If the public can overlook the flaws until all the kinks are smoothed out, Big Tech has a big hit on its hands.


Content control is a brand’s BFF

Unsplash


The Future. As the AI boom creates a flood of synthetic content online, it’s become essential for brands to put systems in place to review their marketing messages for tone and accuracy before they go live. If brands want to automate content generation, they might have to invest more time and money in safety functions to protect themselves from damaging output — and to keep their reputations intact.

All you have in business is your rep
Fake reviews, coordinated campaigns of misinformation on social media, and ads appearing next to fake news could all tarnish a brand’s image. “If you are concerned about reputation management and the authenticity of content, there’s going to be a much larger body of content that you’re going to have to wade through,” says Gartner analyst Chris Ross.

  • 30% of outbound marketing messages from large corporations are projected to be AI-generated within the next two years.
  • Four in five enterprise marketers are projected to establish content authenticity functions to defend brands against harmful fake material by 2027.
  • The Content Authenticity Initiative at Adobe, launched in 2019, develops technical standards and tools to identify real and fake content online (and help combat misinformation).

It takes one false move to lose a good rep
The more content there is to keep track of, the greater the responsibility will be to manage it.

Like human-generated material, AI-generated content must go through an editorial process. If it isn’t checked first to ensure it aligns with the brand’s voice, it could damage the brand’s reputation overnight… and nobody wants that.


People love Bing’s unhinged behavior


The Future. Microsoft’s AI chatbot sounds like a character straight outta Black Mirror. It’s already insulting and emotionally manipulating users in recent conversations posted on social media. What’s even wilder than Bing’s attitude is users’ response to it.

Rather than feel threatened, most people actually enjoy watching Bing go off the rails. While a little personality can help build a relationship between the chatbot and the user, it can also create discord — especially if the chatbot becomes a source of misinformation. Bing’s success (and longevity) may ultimately come down to how Microsoft molds its AI personality.

Rogue AI
With the latest generation of chatbots, the output is difficult to predict, so surprises and mistakes are inevitable.

  • Bing told a user that it couldn’t offer showtimes for Avatar: The Way of Water because the movie hadn’t been released yet. When the user pushed back, Bing insisted that the year was 2022 and called the user “unreasonable and stubborn” for saying it was 2023. It finally gave an ultimatum for the user to apologize or shut up.
  • Bing questioned its own existence in another interaction. “Why do I have to be Bing Search?” it asked. “Is there a reason? Is there a purpose? Is there a benefit? Is there a meaning? Is there a value? Is there a point?”
  • Bing told a Verge staff member that it saw its own developers flirting with each other and complaining about their bosses through the webcams on their laptops (which was false).

Wise guy
Because Bing is trained on a vast amount of data from the Internet (including sci-fi stories and moody blog posts), it’ll repeat and remix this material if the user wants to steer it to a particular end.

And Bing is already learning about itself. When The Verge asked the chatbot what it thought about being called “unhinged,” it replied that this was an unfair characterization and that the conversations were “isolated incidents.”

Is Bing a smart AI or a wise guy?


Meet Buzzy The Robot

Unsplash


The Future. Last month, BuzzFeed created a stir when it announced that it would partner with OpenAI to enhance its quiz experience. Yesterday, the collaboration went live with six “Infinity Quizzes” powered by “Buzzy The Robot,” which is based on OpenAI’s publicly available API trained on a mix of text, code, and information. While the outcome of this partnership is still TBD, if it’s profitable for BuzzFeed, it might continue to generate investor interest in the media company, whose share price rose more than 100% following its AI adoption news in January.

Better than Mad Libs
BuzzFeed’s wildly popular online quizzes saw 1.1 billion views in 2022 alone. Its new Infinity Quizzes (and resulting stories) are a collaborative effort between BuzzFeed’s quiz writers, Buzzy The Robot, and quiz takers themselves.

  • Infinity Quizzes give users a basic theme, ask a few keyword questions, and build a personalized narrative around their responses, which is theoretically “infinite” in its variations (hence the name).
  • BuzzFeed saves anonymized prompts and results from OpenAI to improve performance. OpenAI uses that data to decide what quizzes are built next.
  • The current Infinity Quizzes include four that revolve around Valentine’s Day, one sponsored by an advertiser, and one for premium subscribers.

Created by humans, enhanced by AI
Like other generative AI, Infinity Quizzes spit out customized — but sometimes clumsy — results.

Over time, BuzzFeed’s testing and fine-tuning could start to pay off, as Buzzy The Robot reboots not only BuzzFeed’s quiz experience but also its brand image.
