25K traders bet on ChatGPT’s stock picks, AI sucks at dice throws, and more – Cointelegraph Magazine


Almost 25,000 investors have signed up to trade alongside ChatGPT as they follow the GPT Portfolio experiment from copy trading firm Autopilot.

The traders have bet a combined $14.7 million on the AI’s stock picks, which would average about $600 each if they all invested after signing up. They’re hoping to take even a small slice of a purported 500% return from one of the strategies backtested in academic research.

The GPT Portfolio gets the AI to analyze 10,000 news articles and 100 company reports to select 20 stocks for the $50,000 portfolio, which is updated each week. The initial picks included Berkshire Hathaway, Amazon, D.R. Horton and DaVita. After two weeks, the portfolio is up around 2%, roughly in line with the broader stock market.

Interestingly, the bottom five picks lost more in percentage terms than the top five gained — Dollar Tree lost 17% after it missed earnings — so it might be more sensible in the future to invest only in GPT-4’s best five or 10 ideas, but we’ll see how it works out.

The smaller-scale ChatGPT Crypto Trader account is tweaking a similar strategy that gets GPT-4’s advice on when to go long on Ethereum. The account’s creator says it shows a profit of 11,000% backtested to August 2017, but in the real-world experiment since January, the portfolio is up by a third, while the Ethereum price has gained 60%.

It’s worth being careful using AI for trading, however. Crypto derivatives platform Bitget recently abandoned its experiment with using AI on the platform due to the potential for misinformation. A survey found 80% of its users had a negative experience with the AI, including false investment advice and other misinformation.

Bitget Managing Director Gracy Chen says:

“AI tools, while robust and resourceful, lack the human touch necessary to interpret market nuances and trends accurately.”

The GPT Portfolio hopes that CNN is on the money. (Autopilot)

Are LLMs stupid?

There are two extremes when it comes to thinking about large language models like GPT-4: some people maintain they are stupid mansplaining bots that confidently blurt out fake information, while others believe they will lead to artificial general intelligence (AGI) — intelligence equivalent to or better than a human’s. Researchers from Microsoft published a 155-page paper called “Sparks of Artificial General Intelligence” back in March, arguing the latter, apparently super impressed that GPT-4 was clever enough to work out how to stack a book, nine eggs and a laptop on top of each other.

Demis Hassabis, the co-founder of DeepMind, thinks the rate of progress is set to continue, meaning we may be just “a few years, maybe a decade away” from AGI. But robotics researcher and AI expert Rodney Brooks argues that large language models like ChatGPT are not going to lead to AGI. He says they don’t understand anything and can’t logically infer meaning.

“What the large language models are good at is saying what an answer should sound like, which is different from what an answer should be.”

Another skeptic is AI writer Brian Chau, who is writing a three-part series called “Diminishing returns in machine learning.” He argues that AI development is bumping up against hardware limitations and the extravagant cost of training larger models, and is starting to slow. He puts the chances of AGI at less than 5% by 2043.

ChatGPT’s loaded dice

One task that’s beyond ChatGPT is rolling a die and giving you a random number. Ask it to do so, and it’ll invariably roll a four on its first go. Ask it for a number between one and 10, and it’ll pick seven. Ask it for a number between one and 30, and it’ll pick 17. (Bard is similarly nonrandom.) One Redditor got it to roll a die 50 times, with the bot returning “31 fours, 12 threes, 4 sixes, 3 fives, and no ones or twos.”

What seems to be happening is the “random” numbers it produces are the ones that appear most frequently in its training data — because humans pick those same random numbers most often as well. In fact, the phenomenon of people picking seven when asked to choose a number between one and 10 is so well known, it’s even a magic trick/pick-up technique that The Game author Neil Strauss used to impress Britney Spears when he correctly “guessed” a number she’d chosen.
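Those 50 reported rolls are lopsided enough that a basic chi-square goodness-of-fit test rejects fairness outright. Here’s a minimal sketch in Python using only the standard library; the 11.07 cutoff is the standard chi-square table entry for 5 degrees of freedom at the 0.05 significance level:

```python
# Chi-square goodness-of-fit test on the Redditor's 50 reported rolls.
# Observed counts for faces 1 through 6: no ones or twos, 12 threes,
# 31 fours, 3 fives, 4 sixes. A fair die would give ~8.33 of each.
observed = [0, 0, 12, 31, 3, 4]
expected = sum(observed) / 6  # 50 rolls / 6 faces

# Sum of (observed - expected)^2 / expected over all faces.
chi2 = sum((o - expected) ** 2 / expected for o in observed)

# Critical value for 5 degrees of freedom at the 0.05 level.
CRITICAL_5DF_05 = 11.07

print(f"chi-square statistic: {chi2:.1f}")
print("consistent with a fair die:", chi2 < CRITICAL_5DF_05)
```

The statistic comes out around 85.6, far beyond the 11.07 cutoff, so the chance that a fair die produced counts this skewed is vanishingly small.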


AI job losses may inspire a revolution

Goldman Sachs suggests 300 million jobs will be lost to automation, while The World Economic Forum tips 83 million jobs to go in the next five years alone. Unlike previous waves of job losses as automation increased, these are likely to be concentrated among white-collar workers. James Marriott argues in The Times of London that this could lead to a violent revolution. He points to the French and Russian revolutions being led by disgruntled lawyers and journalists and highlights academic research across 78 countries showing that middle earners are more likely to participate in political unrest than the poor.

He also highlights Peter Turchin’s theory that political unrest is linked to “elite overproduction” — universities churning out highly educated people who can’t get the jobs and status they believe they’re owed. Marriott writes:

“These are precisely the people who may be about to graduate into a world that has no need for them. If the worst happens, they are going to be angry.” 

Adobe Firefly

More than 70 million images were created in the first month after Adobe released the beta of its generative AI tool Firefly. The “generative fill” feature turns anyone into a Photoshop wiz: you simply circle an area of a pic, then type in what you’d like to see — say, a pool of reflective water. You can replace objects or backgrounds, and “expand” images to be as large as you like.

Users have been having great fun expanding classic album covers and famous artworks.

Van Gogh’s The Starry Night in widescreen (Twitter)

The tech will be coming to Google’s Bard soon. Another new project, called DragGAN, enables users to manipulate 2D images as if they were 3D — say, by spinning a car around to see it from different angles or opening a dog’s mouth to show its teeth.

Specialized AIs are the future

According to a trademark filing, JPMorgan is developing a ChatGPT-like service called IndexGPT to help its clients select investments. The news comes hot on the heels of BloombergGPT, which was trained on 40 years’ worth of financial data. Bloomberg claims it performs much better than GPT-4 in terms of financial analysis, although you’ll need to pay $24,000 a year to access it via Bloomberg Terminal, so we’ll take their word for it.

Research does suggest, however, that AIs trained using domain-specific data in areas like science and medicine outperform general-purpose LLMs. Intel is hoping this fact will pay off with its 1 trillion parameter Aurora AI model, which is being trained on scientific texts.

The aim is to work out the biological processes related to cancer and other diseases and come up with drugs to treat them. The enormous potential of this approach was highlighted by the news this week that an AI system combed through 6,680 chemical compounds and found one that kills an antibiotic-resistant superbug called Acinetobacter baumannii, potentially saving millions of lives over the coming decades.

Video of the week

Wes Anderson directs Avatar with the Peculiar Pandora Expedition.

AI creator Bilawal Sidhu transformed the creepy Tesla robot army video into an even creepier Terminator-inspired video in an hour.

Andrew Fenton

Based in Melbourne, Andrew Fenton is a journalist and editor covering cryptocurrency and blockchain. He has worked as a national entertainment writer for News Corp Australia, on SA Weekend as a film journalist, and at The Melbourne Weekly.
