Artificial Intelligence (AI) - key learnings ... Graphic image: Psychology Today
Collated by Maireid Sullivan
2015 - updated 2024
Work in progress

Minimalism: Take a pattern and make it multiply and mutate as it goes.
Like a seed: Human creativity reaps a harvest, but we deserve to know when our 'work' is plagiarised by artificially generated intelligence: "AI"

"We are allowing the technologists to frame this as a problem that they’re equipped to solve. … That’s a lie. People talk about AI as if it will know truth. AI’s not gonna solve these problems. AI cannot solve the problem of fake news. Google doesn’t have the option of saying, “Oh, is this conspiracy?
Is this truth?” Because they don’t know what truth is. …
They don’t have a proxy for truth that’s better than a click."

- Cathy O’Neil, author, the award winning Weapons of Math Destruction, 2016, Penguin Australia
"If you're not prepared to be wrong, you'll never come up with anything original." - Sir Ken Robinson (1950-2020)

Selected References

2024

December 2024

Meet Kate Crawford: the researcher using art to demystify AI
2 December 2024, State Library of Victoria, Australia
"an ARIA-nominated musician, recognised by TIME as one of the 100 most influential people in AI"

Excerpt: Professor Kate Crawford is a world-leading scholar in artificial intelligence, who has built a career studying the social and political implications of AI – while also breaking new ground in creative approaches to the subject.

She is a veteran in the field

While AI might seem like a recent innovation, Crawford has been studying big data, algorithms and machine learning for over 20 years.

Her work focuses on ‘opening the black box of AI’, not only to expose the biases, assumptions, errors, and ideological positions within AI technologies, but also the complex chains of labour and production which underpin them:

‘This means looking at the natural resources that drive it, the energy that it consumes, the hidden labour all along the supply chain, and the vast amounts of data that are extracted from every platform and device that we use every day,’ she has said.

Her art has been shown at MoMA in New York, London’s V&A and Fondazione Prada in Milan

As well as her scholarly and journalistic outputs, Crawford uses artistic data visualisation to communicate the invisible systems and processes that power AI.

Her artistic outputs include Anatomy of an AI System, created with fellow artist and researcher Vladan Joler, which explores the life cycle of an Amazon Echo smart speaker; Training Humans, in collaboration with artist and photographer Trevor Paglen, which examines how images are used to train AI systems; and Calculating Empires: A Genealogy of Technology and Power, 1500-2025, which maps over 500 years of how communication and computation are intertwined with systems of power and control – from the Gutenberg printing press right through to large language models.

Crawford has explained that she uses art as part of her research output because discussions about AI are too important to keep within a rarefied academic context: ‘Given the enormous social and political impact, this has to be a set of issues and questions that are as public as possible,’ she has said.

She was a founding member of the feminist collective Deep Lab

Formed in 2014, this cyber-feminist collective comprised a diverse group of researchers, artists, writers, engineers, and cultural producers. Their work, which spanned lectures, publications, contemporary art, public programming and performances, aimed to combat discrimination towards marginalised people at the hands of ‘corporate dominance, data mining, government surveillance, and a male-dominated tech field.’ ... >>> more

October 2024

The 2024 Nobel Prize in Economics
This year’s Nobel prize in economics awarded to team that examined what makes some countries rich and others poor
by John Hawkins, University of Canberra, October 15, 2024, The Conversation

Excerpts:
The 2024 Nobel Prize in Economics has been awarded to three US-based economists who examined the advantages of democracy and the rule of law, and why they are strong in some countries and not others.
Daron Acemoglu is a Turkish-American economist at the Massachusetts Institute of Technology, Simon Johnson is a British economist at the Massachusetts Institute of Technology and James Robinson is a British-American economist at the University of Chicago.
The citation awards the prize “for studies of how institutions are formed and affect prosperity”, making it an award for research into politics and sociology as much as economics.
... Last year Acemoglu and Johnson published Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity.
... In May this year Acemoglu wrote about artificial intelligence, putting forward the controversial position that its effects on productivity would be “nontrivial but modest”, which is another way of saying “tiny”. Its effect on wellbeing might be even smaller and it was unlikely to reduce inequality. ...

August 2024

AI was born at a US summer camp 68 years ago.
Here’s why that event still matters today

By Sandra Peter, University of Sydney

Excerpt
... in 1956, in a quiet corner of New Hampshire ...
The Dartmouth Summer Research Project on Artificial Intelligence, often remembered as the Dartmouth Conference, kicked off on June 18 and lasted for about eight weeks. It was the brainchild of four American computer scientists – John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon – and brought together some of the brightest minds in computer science, mathematics and cognitive psychology at the time.

These scientists, along with some of the 47 people they invited, set out to tackle an ambitious goal: to make intelligent machines.

As McCarthy put it in the conference proposal, they aimed to find out “how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans”.

The birth of a field – and a problematic name

The Dartmouth Conference didn’t just coin the term “artificial intelligence”; it coalesced an entire field of study. It’s like a mythical Big Bang of AI – everything we know about machine learning, neural networks and deep learning now traces its origins back to that summer in New Hampshire.

But the legacy of that summer is complicated.

Artificial intelligence won out as a name over others proposed or in use at the time. Shannon preferred the term “automata studies”, while two other conference participants (and the soon-to-be creators of the first AI program), Allen Newell and Herbert Simon, continued to use “complex information processing” for a few years still.

But here’s the thing: having settled on AI, no matter how much we try, today we can’t seem to get away from comparing AI to human intelligence.

This comparison is both a blessing and a curse. . . .

July 2024

Open Source AI is the Path Forward
By Mark Zuckerberg, Founder and CEO, Meta

Excerpt
. . . Today we’re taking the next steps towards open source AI becoming the industry standard. We’re releasing Llama 3.1 405B, the first frontier-level open source AI model, as well as new and improved Llama 3.1 70B and 8B models. In addition to having significantly better cost/performance relative to closed models, the fact that the 405B model is open will make it the best choice for fine-tuning and distilling smaller models.
Beyond releasing these models, we’re working with a range of companies to grow the broader ecosystem. Amazon, Databricks, and NVIDIA are launching full suites of services to support developers fine-tuning and distilling their own models.
. . .
Why Open Source AI Is Good for Developers
. . .
Why Open Source AI Is Good for Meta
. . .
Why Open Source AI Is Good for the World
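
The "distilling smaller models" Zuckerberg mentions above is a standard technique: a small "student" model is trained to match the output distribution of a large "teacher". A minimal, generic Python sketch of the usual distillation loss follows – illustrative only, not Meta's Llama tooling, and the toy logits are made up:

```python
import numpy as np

def softmax(z, T=1.0):
    """Temperature-softened softmax: higher T flattens the distribution."""
    z = z / T
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KL(teacher || student) on softened distributions, scaled by T^2
    (the usual convention so gradients stay comparable across temperatures)."""
    p_teacher = softmax(teacher_logits, T)
    p_student = softmax(student_logits, T)
    return float(np.sum(p_teacher * (np.log(p_teacher) - np.log(p_student))) * T * T)

# Toy usage: hypothetical logits from a large "teacher" guide a small "student".
teacher = np.array([2.0, 0.5, -1.0])
student = np.array([1.0, 0.2, -0.5])
print(distillation_loss(student, teacher))
```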


June 2024

The Love Letter Generator That Foretold ChatGPT
By Patricia Fancher, June 26, 2024, JSTOR

Excerpt:
In the early 1950s, small, peculiar love letters were pinned up on the walls of the computing lab at the University of Manchester.

Two of them published by Christopher Strachey read:

Darling Sweetheart
You are my avid fellow feeling. My affection curiously clings to your passionate wish. My liking yearns for your heart. You are my wistful sympathy: my tender liking.
Yours beautifully
M U C

Honey Dear
My sympathetic affection beautifully attracts your affectionate enthusiasm. You are my loving adoration: my breathless adoration. My fellow feeling breathlessly hopes for your dear eagerness. My lovesick adoration cherishes your avid ardour.
Yours wistfully
M U C

These are strange love letters, for sure. And the history behind them is even stranger; examples of the world’s first computer-generated writing, they’re signed by MUC, the acronym for the Manchester University Computer. In 1952, decades before ChatGPT started to write students’ essays, before OpenAI’s computer-generated writing was integrated into mainstream media outlets, two gay men—Alan Turing and Christopher Strachey—essentially invented AI writing. Alongside Turing, Strachey worked on several experiments with Artificial Intelligence: a computer that could sing songs, one of the world’s first computer games, and an algorithm to write gender-neutral mash notes that screamed with longing.
. . .
On May 15, 1951, Turing delivered a short radio broadcast titled “Can Digital Computers Think?” for the BBC Home Service. It was a question both he and Strachey were exploring. In his lecture, Turing asks listeners to imagine the computer as a mechanical brain, similar to but not exactly like a human brain. A computer can learn, it can be trained, and with time, Turing said, it can exhibit its own unique form of intelligence. He noted one particular difficulty: the computer can do only what the human programmer stipulates. It lacks free will.

“To behave like a brain seems to involve free will,” Turing continues, “but the behavior of a digital computer, when it has been programmed, is completely determined.”

To solve this problem, he suggests a trick. The computer could use a roulette wheel feature to select variables randomly. Then, the computer would appear to make something original and new by adding in a touch of randomness. . . .
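
Strachey's generator was, in essence, a fixed template whose blanks were filled by random draws from small word lists, with Turing's "roulette wheel" supplying the randomness. A minimal Python sketch of that scheme – the word lists here are abbreviated stand-ins, not Strachey's originals:

```python
import random

# Abbreviated word lists in the spirit of Strachey's (not his actual lists).
ADJECTIVES = ["affectionate", "amorous", "breathless", "avid", "curious",
              "eager", "fervent", "lovesick", "passionate", "wistful"]
NOUNS = ["adoration", "affection", "ardour", "desire", "enthusiasm",
         "fellow feeling", "liking", "longing", "sympathy", "wish"]
ADVERBS = ["anxiously", "beautifully", "breathlessly", "curiously",
           "fondly", "keenly", "tenderly", "wistfully"]
VERBS = ["adores", "attracts", "cherishes", "clings to", "hopes for",
         "longs for", "yearns for"]
SALUTATIONS = ["Darling Sweetheart", "Honey Dear", "Dear Darling"]

def sentence() -> str:
    """One clause: the 'roulette wheel' picks every template slot at random."""
    if random.random() < 0.5:
        return (f"My {random.choice(ADJECTIVES)} {random.choice(NOUNS)} "
                f"{random.choice(ADVERBS)} {random.choice(VERBS)} "
                f"your {random.choice(ADJECTIVES)} {random.choice(NOUNS)}.")
    return f"You are my {random.choice(ADJECTIVES)} {random.choice(NOUNS)}."

def love_letter() -> str:
    body = " ".join(sentence() for _ in range(5))
    return (f"{random.choice(SALUTATIONS)},\n{body}\n"
            f"Yours {random.choice(ADVERBS)},\nM U C")

if __name__ == "__main__":
    print(love_letter())
```

Run a few times, this produces letters recognizably like the two quoted above: a fixed grammatical skeleton, with randomness standing in for originality – exactly the trick Turing describes.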

March 2024

Nvidia sued for allegedly using copyrighted books to train AI

by Leigh Mc Gowran, Silicon Republic, 11 March 2024

Nvidia has joined the list of companies facing lawsuits over claims that copyrighted material is being used to train their AI models.

Nvidia is facing legal trouble from a trio of authors, who claim the company used copyrighted books to train one of its AI models.

The US dispute – first reported by Reuters – involves authors Brian Keene, Abdi Nazemian and Stewart O’Nan, who claim their works were included in a dataset used to train NeMo, an Nvidia framework designed to build and customise generative AI models.

This dataset was taken down last October for reported copyright infringement. The three authors are seeking damages for copyrighted works that helped train NeMo’s large language models in the last three years, Reuters reports. The copyright dispute was filed with the US District Court for the Northern District of California.

AI v copyright

The case is the latest in the growing issue of copyright infringement for AI companies. Last year saw a number of authors file suits against both OpenAI and Meta, with claims that their AI models used their books as training material.

Those lawsuits claimed the large language models developed by Meta and OpenAI were trained on illegal “shadow libraries” – websites that contain pirated versions of the authors’ books.

Last year also saw thousands sign a letter written by the US Authors Guild, calling on the likes of OpenAI, Alphabet and Meta to stop using their work to train AI models without “consent, credit or compensation”.

Towards the end of 2023, The New York Times stepped into the ring with a high-profile lawsuit against both OpenAI and Microsoft. The media outlet claimed AI models such as ChatGPT copied and used millions of copyrighted news articles, in-depth investigations and other journalistic work.

In January, OpenAI said it was “surprised and disappointed” by the lawsuit and added the newspaper was “not telling the full story”. It followed up in February with a claim that The New York Times “paid someone to hack OpenAI’s products” to generate “highly anomalous results” used as evidence in its AI copyright case.

In a statement sent to SiliconRepublic.com, Ian Crosby, a partner at Susman Godfrey and lead counsel for The New York Times, noted that OpenAI did not dispute that it copied millions of articles from the media outlet to build its products.

OpenAI is also facing a class-action lawsuit filed last year, which claims the company scraped the internet to train its generative AI chatbot and potentially violated the rights of millions as a result.

February 2024

The "Godfather of AI"
Monday 19 February 2024, University of Oxford (36:53)

‘Will Digital Intelligence Replace Biological Intelligence?’
Professor Geoffrey Hinton, CC, FRS, FRSC, received his PhD in artificial intelligence from Edinburgh in 1978.

Synopsis
Digital computers were designed to allow a person to tell them exactly what to do. They require high energy and precise fabrication, but in return they allow exactly the same model to be run on physically different pieces of hardware, which makes the model immortal. For computers that learn what to do, we could abandon the fundamental principle that the software should be separable from the hardware and mimic biology by using very low power analog computation that makes use of the idiosyncratic properties of a particular piece of hardware. This requires a learning algorithm that can make use of the analog properties without having a good model of those properties. Using the idiosyncratic analog properties of the hardware makes the computation mortal: when the hardware dies, so does the learned knowledge. The knowledge can be transferred to a younger analog computer by getting the younger computer to mimic the outputs of the older one, but education is a slow and painful process.

By contrast, digital computation makes it possible to run many copies of exactly the same model on different pieces of hardware. Thousands of identical digital agents can look at thousands of different datasets and share what they have learned very efficiently by averaging their weight changes. That is why chatbots like GPT-4 and Gemini can learn thousands of times more than any one person. Also, digital computation can use the backpropagation learning procedure, which scales much better than any procedure yet found for analog hardware.

This leads me to believe that large-scale digital computation is probably far better at acquiring knowledge than biological computation and may soon be much more intelligent than us. The fact that digital intelligences are immortal and did not evolve should make them less susceptible to religion and wars, but if a digital super-intelligence ever wanted to take control, it is unlikely that we could stop it. So the most urgent research question in AI is how to ensure that they never want to take control.
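
The weight-sharing Hinton describes can be made concrete with a toy example: identical model copies each compute gradients on their own data shard, then stay perfectly synchronized by applying the averaged update. A minimal NumPy sketch, with a hypothetical linear model and synthetic data (purely illustrative, not Hinton's code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_features = 4, 8
shared_weights = rng.normal(size=n_features)  # every digital copy starts identical

def local_gradient(weights, X, y):
    """Mean-squared-error gradient on one agent's private data shard."""
    return 2 * X.T @ (X @ weights - y) / len(y)

for step in range(100):
    gradients = []
    for _ in range(n_agents):
        # Each agent sees its own shard of data...
        X = rng.normal(size=(32, n_features))
        y = X @ np.ones(n_features) + 0.1 * rng.normal(size=32)
        gradients.append(local_gradient(shared_weights, X, y))
    # ...but the copies share what they learned by averaging their weight
    # changes, so all of them remain exactly identical after every update --
    # the efficient knowledge transfer Hinton contrasts with slow, mortal,
    # analog "education".
    shared_weights -= 0.05 * np.mean(gradients, axis=0)
```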

The speaker

Geoffrey Hinton was one of the researchers who introduced the backpropagation algorithm and the first to use backpropagation for learning word embeddings. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning and deep learning. His research group in Toronto made major breakthroughs in deep learning that revolutionized speech recognition and object classification.
Geoffrey Hinton is a fellow of the UK Royal Society and a foreign member of the US National Academy of Engineering, the US National Academy of Science and the American Academy of Arts and Sciences. His awards include the David E. Rumelhart prize, the IJCAI award for research excellence, the Killam prize for Engineering, the IEEE Frank Rosenblatt medal, the NSERC Herzberg Gold Medal, the IEEE James Clerk Maxwell Gold medal, the NEC C&C award, the BBVA award, the Honda Prize, the Princess of Asturias Award and the Turing Award.

January 2024

Sam Altman, American entrepreneur
Britannica, 22 January 2024

Excerpt
Sam Altman (born April 22, 1985, Chicago, Illinois, U.S.) ... has been compared to tech visionaries, including Steve Jobs and Bill Gates, and is known for his belief that artificial general intelligence (AGI) will be able to do anything that humans can. >>>more

2023
Back to top

December 2023

Sam Altman, TIME Magazine 2023 CEO of the Year
Image source: Henry George Foundation of Canada

December 2023

Sam Altman - TIME 2023 CEO of the Year
By Naina Bajekal and Billy Perrigo, San Francisco
December 6, 2023, TIME Magazine

Excerpts:
We’re speaking exactly one year after OpenAI released ChatGPT, the most rapidly adopted tech product ever. The impact of the chatbot and its successor, GPT-4, was transformative—for the company and the world. “For many people,” Altman says, 2023 was “the year that they started taking AI seriously.” Born as a nonprofit research lab dedicated to building artificial intelligence for the benefit of humanity, OpenAI became an $80 billion rocket ship. Altman emerged as one of the most powerful and venerated executives in the world, the public face and leading prophet of a technological revolution.
. . . “I think that’s the responsibility of capitalism,” Altman says. “You take big swings at things that are important to get done.”
Altman’s pursuit of fusion hints at the staggering scope of his ambition. He’s put $180 million into Retro Biosciences, a longevity startup hoping to add 10 healthy years to the human life-span. He conceived of and helped found Worldcoin, a biometric-identification system with a crypto-currency attached, which has raised hundreds of millions of dollars. Through OpenAI, Altman has spent $10 million seeding the longest-running study into universal basic income (UBI) anywhere in the U.S., which has distributed more than $40 million to 3,000 participants, and is set to deliver its first set of findings in 2024. Altman’s interest in UBI speaks to the economic dislocation that he expects AI to bring—though he says it’s not a “sufficient solution to the problem in any way.”
. . . Altman published a 10-point policy platform, which he dubbed the United Slate, with goals that included lowering housing costs, Medicare for All, tax reform, and ambitious clean-energy targets. He ultimately passed on a career switch. “It was so clear to me that I was much better suited to work on AI,” Altman says, “and that if we were able to succeed, it would be a much more interesting and impactful thing for me to do.”
. . . Altman’s beliefs are shaped by the theories of late 19th century political economist Henry George, who combined a belief in the power of market incentives to deliver increasing prosperity with a disdain for those who speculate on scarce assets, like land, instead of investing their capital in human progress. Altman has advocated for a land-value tax—a classic Georgist policy—in recent meetings with world leaders, he says.
Asked on a walk through OpenAI’s headquarters whether he has a vision of the future to help make sense of his various investments and interests, Altman says simply, “Abundance. That’s it.” >>>more

November 2023

OpenAI and Microsoft Face New Copyright Lawsuit for Allegedly
‘Taking the Combined Works of Humanity Without Permission’

By Dylan Smith, November 22, 2023

Excerpt
OpenAI is now facing yet another copyright infringement lawsuit over the protected media that it allegedly used to train ChatGPT. Unlike similar complaints, however, the newly filed class-action also names Microsoft as a defendant.
Julian Sancton, the author of 2021’s Madhouse at the End of the Earth, only recently submitted the suit to a New York federal court. As highlighted, the case is one of several levied against OpenAI (which reinstated Sam Altman as CEO today) owing to the alleged unauthorized use of copyrighted works.
. . .
“OpenAI and Microsoft have built a business valued into the tens of billions of dollars by taking the combined works of humanity without permission,” the legal text begins, proceeding to explore at relative length the “close” relationship between the entities.
“While OpenAI was responsible for designing the calibration and fine-tuning of the GPT models—and thus, the largescale copying of this copyrighted material involved in generating a model programmed to accurately mimic Plaintiff’s and others’ styles—Microsoft built and operated the computer system that enabled this unlicensed copying in the first place,” the document continues.

Beyond this significant difference – which could, of course, have major implications down the line – other components of the suit resemble those within the above-mentioned complaints.
Specifically, Sancton’s action touches upon OpenAI’s alleged transition from a non-profit “into a complex (and secretive) labyrinth of for-profit corporate entities,” the sources from which OpenAI allegedly accessed protected media, and the importance of training ChatGPT on “quality” content.
. . .
“While OpenAI’s anthropomorphizing of its models is up for debate, at a minimum, humans who learn from books buy them, or borrow them from libraries that buy them, providing at least some measure of compensation to authors and creators,” the suit drives home towards its end. “OpenAI does not, and it has usurped authors’ content for the purpose of creating a machine built to generate the very type of content for which authors would usually be paid.”
Among other things, Sancton is seeking statutory and compensatory damages, disgorgement of profits, and an order permanently enjoining the described alleged infringement.

November 2023

What Sam Altman’s Firing Means for the Future of OpenAI
By Steven Levy, Nov. 18, 2023, WIRED
Sam Altman made OpenAI into a powerhouse by adding a profit-seeking arm to its utopian mission. After the board rejected his vision, the company’s remaining leaders must figure out a new path forward.

Excerpt:…
First, it’s important to remember that OpenAI was founded by Altman and Elon Musk to fulfill a mission. “The organization is trying to develop a human-positive AI. And because it’s a nonprofit, it will be owned by the world,” Altman told me in December 2015, just before the project was revealed to the world.

Though it seemed clear that Altman was the primary instigator, he was not yet OpenAI’s leader. But the company was squarely under his bailiwick: OpenAI was to be part of the research wing of the startup incubator Y Combinator, where Altman was CEO. Altman had started the division to chase the dream of using tech to solve the world’s knottiest problems when he became YC’s top executive. The original plan for OpenAI was to gather a relatively low number of the world’s best AI scientists and discover the keys to artificial general intelligence able to outperform humans on every dimension, inside a structure that gave ownership of this unimaginably powerful technology to the people, not giant corporations. >>>more

November 2023

Why the Godfather of A.I. Fears What He’s Built
Geoffrey Hinton has spent a lifetime teaching computers to learn. Now he worries that artificial brains are better than ours.
By Joshua Rothman
November 13, 2023, The New Yorker
(Published in the print edition of the November 20, 2023, issue, with the headline “Metamorphosis.”)

Excerpt:
In your brain, neurons are arranged in networks big and small. With every action, with every thought, the networks change: neurons are included or excluded, and the connections between them strengthen or fade. This process goes on all the time—it’s happening now, as you read these words—and its scale is beyond imagining. You have some eighty billion neurons sharing a hundred trillion connections or more. Your skull contains a galaxy’s worth of constellations, always shifting.
. . .
There are many reasons to be concerned about the advent of artificial intelligence. It’s common sense to worry about human workers being replaced by computers, for example. But Hinton has joined many prominent technologists, including Sam Altman, the C.E.O. of OpenAI, in warning that A.I. systems may start to think for themselves, and even seek to take over or eliminate human civilization. It was striking to hear one of A.I.’s most prominent researchers give voice to such an alarming view.
“People say, It’s just glorified autocomplete,” he told me...

September 2023

According to Statista, the global artificial intelligence market will reach US$1.8 trillion by 2030 - across logistics, healthcare, manufacturing, banking, and retail.
Statista - Artificial Intelligence (AI) worldwide - statistics & facts:

- Artificial intelligence (AI), once the subject of people’s imaginations and the main plot of science fiction movies for decades, is no longer a piece of fiction, but rather commonplace in people’s daily lives whether they realize it or not.
- AI refers to the ability of a computer or machine to mimic the competencies of the human mind, which often learns from previous experiences to understand and respond to language, decisions, and problems.

September 2023

Chest radiography as a biomarker of ageing: artificial intelligence-based,
multi-institutional model development and validation in Japan
Mitsuyama et al., The Lancet Healthy Longevity, September 2023, Vol. 4, Issue 9

Excerpt:
“We investigated the odds ratios (ORs) for various diseases given the difference between the AI-estimated age and chronological age (ie, the difference-age).

Findings
We included 101,296 chest radiographs from 70,248 participants across five institutions.
...
Interpretation
The AI-estimated age using chest radiographs showed a strong correlation with chronological age in the healthy cohorts. Furthermore, in cohorts of individuals with known diseases, the difference between estimated age and chronological age correlated with various chronic diseases. The use of this biomarker might pave the way for enhanced risk stratification methodologies, individualised therapeutic interventions, and innovative early diagnostic and preventive approaches towards age-associated pathologies.
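
For readers unfamiliar with the method, odds ratios of this kind typically come from regressing disease status on the difference-age. A toy Python sketch with synthetic data (not the study's data or code; the effect size is invented):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# "Difference-age" = AI-estimated age minus chronological age.
# Synthetic setup: pretend disease risk rises with difference-age
# via a true log-odds coefficient of 0.15 per year.
rng = np.random.default_rng(1)
n = 5000
difference_age = rng.normal(0, 3, n)
p_disease = 1 / (1 + np.exp(-(-2.0 + 0.15 * difference_age)))
disease = rng.random(n) < p_disease

# Fit logistic regression; exponentiating the coefficient gives the OR
# per +1 year of difference-age.
model = LogisticRegression().fit(difference_age.reshape(-1, 1), disease)
odds_ratio = np.exp(model.coef_[0][0])
print(f"OR per +1 year difference-age: {odds_ratio:.2f}")  # ~ e^0.15 = 1.16
```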

July 2023

Google testing AI tool that writes news articles
Tool is said to have been pitched to several US news outlets as an aid for journalists rather than a replacement
20 July, 2023, The Guardian

Google is testing an artificial intelligence tool that can write news articles, in the latest evidence that the technology has the potential to transform white-collar professions.
The product, known as Genesis, uses AI technology to absorb information such as details of current events and then create news stories. The tool was pitched to the New York Times, the Washington Post, and the Wall Street Journal’s owner, News Corp, as a “helpmate”, according to the New York Times.
Google said it was in the early stages of exploring the AI tool, which it said could assist journalists with options for headlines or different writing styles. It stressed that the technology was not intended to replace journalists.
...
Last paragraph
While newsrooms explore the possibility of using AI, an investigation this year by the anti-misinformation outfit NewsGuard found bots were already powering dozens of AI-generated content farms.

May 2023

What’s new in the world of generative AI?
May 31, 2023 TechCrunch

…With the rapid evolution of AI models, it’s hard to keep up.
But don’t worry. TechCrunch+ has your back.
GPT-4 was a big update
GPT-4 wipes the floor with GPT-3.5 (i.e., ChatGPT) 
The point of this research is… to find methods by which relatively simple AI models can improve themselves based on their “experiences,” for lack of a better word. If we’re going to have robots helping us in our homes, hospitals, and offices, they will need to learn and apply those lessons to future actions.

via The Conversation

- "The first waves of AI-generated text have writers and publishers reeling."
Authors are resisting AI with petitions and lawsuits. But they have an advantage: we read to form relationships with writers
July 26, 2023

- Replacing news editors with AI is a worry for misinformation, bias and accountability.
June 23, 2023

- AI pioneer Geoffrey Hinton says AI is a new form of intelligence unlike our own. Have we been getting it wrong this whole time?
May 4, 2023

- Calls to regulate AI are growing louder. But how exactly do you regulate a technology like this?
April 5, 2023

May 2023

ChatGPT’s Altman pleads US Senate for AI rules

17 May 2023
TheOnlineCitizen.com

Excerpt:
WASHINGTON, UNITED STATES — Sam Altman, the chief executive of ChatGPT’s OpenAI, told US lawmakers on Tuesday that regulating artificial intelligence was essential after his poem-writing chatbot stunned the world.
The lawmakers stressed their deepest fears of AI’s developments, with a leading senator opening the hearing on Capitol Hill with a computer-generated voice — which sounded remarkably similar to his own — reading a text written by the bot.
“If you were listening from home, you might have thought that voice was mine and the words from me, but in fact, that voice was not mine,” said Senator Richard Blumenthal.
. . .
“If this technology goes wrong, it can go quite wrong,” Altman said.
In a session tipped as an opportunity to educate lawmakers, Altman urged Congress to impose new rules on big tech, despite deep political divisions that for years have blocked legislation aimed at regulating the internet.
But governments worldwide are under pressure to move quickly after ChatGPT, a bot that can churn out human-like content in an instant, went viral and both wowed and spooked users... >>>more

May 2023

“Power and Progress: Our Thousand-Year Struggle Over Technology and Prosperity”
Daron Acemoglu and Simon Johnson, May 2023, Hachette Book Group

In this bold reinterpretation of economics and history, the consequences of sole reliance on AI are revealed – as well as what must be done to redirect innovation so it benefits all.
. . .
Power and Progress demonstrates the path of technology, and how it might be brought under control. Cutting-edge technological advances can become empowering tools, but not if all major decisions remain in the hands of a few hubristic tech leaders. 

Review:
An AI challenge only humans can solve
by Peter Dizikes MIT News Office, May 17, 2023
Excerpts
Economists Daron Acemoglu and Simon Johnson ask whether the benefits of AI will be shared widely or feed inequality. . . . they examine who reaped the rewards from past innovations and who may gain from AI today, economically and politically.
. . .
Today, AI is a tool of social control for some governments that also creates riches for a small number of people, according to Acemoglu and Johnson. “The current path of AI is neither good for the economy nor for democracy, and these two problems, unfortunately, reinforce each other,” they write.
. . .
What do Acemoglu and Johnson think is deficient about AI? For one thing, they believe the development of AI is too focused on mimicking human intelligence. The scholars are skeptical of the notion that AI mirrors human thinking all told — even things like the chess program AlphaZero, which they regard more as a specialized set of instructions. . . .

May 2023

Internet Archive (IA) - Managing AI's "Hallucination Challenge"
Internet Archive founder and digital librarian, Brewster Kahle explains how AI services can become more dependable, reliable, and trustworthy.

Anti-Hallucination Add-on for AI Services Possibility
by Brewster Kahle
May 3, 2023, Internet Archive Blog

Excerpt:
Chatbots, like OpenAI’s ChatGPT, Google’s Bard and others, have a hallucination problem (their term, not ours). They can make something up and state it authoritatively. It is a real problem. But there can be an old-fashioned answer, as a parent might say: “Look it up!”
Imagine for a moment that the Internet Archive, working with responsible AI companies and research projects, could automate “Looking it Up” in a vast library to make those services more dependable, reliable, and trustworthy. How?
The Internet Archive and AI companies could offer an anti-hallucination service ‘add-on’ to the chatbots that could cite supporting evidence and counter claims to chatbot assertions by leveraging the library collections at the Internet Archive (most of which were published before generative AI).
By citing evidence for and against assertions based on papers, books, newspapers, magazines, TV, radio and government documents, we can build a stronger, more reliable knowledge infrastructure for a generation that turns to their screens for answers. Although many of these generative AI companies are already linking, or intend to link, their models to the internet, what the Internet Archive can uniquely offer is our vast collection of “historical internet” content. We have been archiving the web for 27 years, which means we have decades of human-generated knowledge. This might become invaluable in an age when we might see a drastic increase in AI-generated content. So an Internet Archive add-on is not just a matter of leveraging knowledge available on the internet, but also knowledge available on the history of the internet.
Is this possible? We think yes, because we are already doing something like this for Wikipedia, by hand and with special-purpose robots like InternetArchiveBot. Wikipedia communities, and these bots, have fixed over 17 million broken links and have linked one million assertions to specific pages in over 250,000 books. With the help of the AI companies, we believe we can make this an automated process that could respond to the customized essays their services produce. Much of the same technologies used for the chatbots can be used to mine assertions in the literature and find when, and in what context, those assertions were made.
The result would be a more dependable World Wide Web, one where disinformation and propaganda are easier to challenge, and therefore weaken.
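
As a thought experiment, the add-on Kahle sketches could look something like this in code: every chatbot assertion is run against a library corpus and returned with citations for and against. Everything below – the corpus format, the keyword matcher, the threshold – is a hypothetical stand-in for real retrieval over the Archive's collections, not an Internet Archive API:

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    source: str      # e.g. a book, newspaper, or archived web page
    year: int        # publication date (ideally pre-generative-AI)
    supports: bool   # does this source support or counter the assertion?

def look_it_up(assertion: str, corpus: list[dict]) -> list[Evidence]:
    """Naive keyword overlap, standing in for real retrieval over the archive."""
    words = set(assertion.lower().split())
    hits = []
    for doc in corpus:
        overlap = words & set(doc["text"].lower().split())
        if len(overlap) >= 3:  # arbitrary threshold for this sketch
            hits.append(Evidence(doc["source"], doc["year"], doc["agrees"]))
    return hits

def annotate(chatbot_answer: str, corpus: list[dict]) -> str:
    """Attach supporting/countering citations to a chatbot's answer."""
    evidence = look_it_up(chatbot_answer, corpus)
    if not evidence:
        return chatbot_answer + "\n[No sources found in the library: verify.]"
    cites = "; ".join(
        f"{e.source} ({e.year}, {'supports' if e.supports else 'counters'})"
        for e in evidence)
    return f"{chatbot_answer}\n[Sources: {cites}]"
```

The design point is that the checker sits outside the language model: the model's fluency is irrelevant to whether an assertion can be matched to dated, human-generated sources.
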
Yes, there are 4 major publishers suing to destroy a significant part of the Internet Archive’s book corpus, but we are appealing this ruling. We believe that one role of a research library like the Internet Archive, is to own collections that can be used in new ways by researchers and the general public to understand their world.
What is required? Common purpose, partners, and money. We see a role for a Public AI Research laboratory that can mine vast collections without rights issues arising. While the collections are significant already, we see collecting, digitizing, and making available the publications of the democracies around the world to expand the corpus greatly.
We see roles for scientists, researchers, humanists, ethicists, engineers, governments, and philanthropists, working together to build a better Internet.
If you would like to be involved, please contact Mark Graham at mark@archive.org.
... more

OPEN MEDIA: What’s happening with the Internet Archive
June 16, 2023
Closing paragraph:
The Internet Archive and the right to access information are vital for a functioning democracy and a free society. We must support the Internet Archive in its fight against corporations who seek to limit our access to knowledge. We must enshrine our right to access information and ensure that it is protected for future generations. The Internet Archive will appeal the court decision that ruled in favour of publishers. Support their fight. ...

MEDIUM: Digital Librarian for and of the World
June 14, 2023
Excerpt:
Brewster Kahle is a Silicon Valley maverick, a man on a mission, and founder of the non-profit Internet Archive. Inducted into the Internet Hall of Fame, he is an original and uncannily insightful deep thinker. Geeky and bespectacled, he never hesitates to speak his mind. It is not surprising that he is a voracious reader, loves books and wants to save them for generations.
Situated within a historic church in the Richmond District in San Francisco, the Internet Archive office may seem like yet another eccentric whim of a Silicon Valley native. Brewster chose the place because it resembles their logo — pillars that look like the Library of Alexandria, which was destroyed by fire in 48 BCE. ...

March 2023

'Framing Humans for AI'
Journal for the Philosophy of Language, Mind and the Arts (JOLMA)
issue 4, 30 March 2023
by Gabriella Giannachi, University of Exeter
This article, developed in conversation with ChatGPT and GPT-4, explores how artists have represented human-machine AI entanglements by using works by Lynn Hershman Leeson, Mario Klingemann, Kate Crawford and Trevor Paglen, and Luca Viganò as case studies.
Full text

March 2023

Noam Chomsky: The False Promise of ChatGPT
By Noam Chomsky, Ian Roberts and Jeffrey Watumull, 8 March 2023, The New York Times

2022
Back to top

March 2022

Here’s how new technology is bringing home Ukraine’s tragedy
By Steven Zeitchik, Digital Frontiers Reporter, Washington Post
"“Gen Z doesn’t consume media the way that people that came before did."
[On 24 February 2022, Russia, Ukraine and NATO escalated a war started in 2014]

Excerpt:
The so-called social web of Twitter and Facebook is well established as a place of community, information and, of course, outrage. But new immersive technologies hint at how we might come to connect with war in the decade ahead. If for much of the 20th century radio and television piped war’s horrors into our living rooms, and this century has seen social media posts and video clips bring them to our pockets, these new technologies will wire them directly into our minds.
“There’s something about virtual reality and augmented reality that is very suited to war because VR and AR can convey war’s dilemmas like nothing else,” . . .
“This would be very meaningful to people if they could experience it in VR,”

 

2021
Back to top



July 2021

Kate Crawford, Honorary Professor at the University of Sydney and one of the world's foremost scholars on the social and political implications of artificial intelligence. 29 July 2021


March 2021

"ChatGPT"
Moore's Law for Everything

by Sam Altman, CEO for the OpenAI group which created ChatGPT
16 March 2021

Excerpt:
My work at OpenAI reminds me every day about the magnitude of the socioeconomic change that is coming sooner than most people believe. Software that can think and learn will do more and more of the work that people now do. Even more power will shift from labor to capital. If public policy doesn’t adapt accordingly, most people will end up worse off than they are today.

We need to design a system that embraces this technological future and taxes the assets that will make up most of the value in that world–companies and land–in order to fairly distribute some of the coming wealth. Doing so can make the society of the future much less divisive and enable everyone to participate in its gains.

In the next five years, computer programs that can think will read legal documents and give medical advice. In the next decade, they will do assembly-line work and maybe even become companions. And in the decades after that, they will do almost everything, including making new scientific discoveries that will expand our concept of “everything.”
This technological revolution is unstoppable. And a recursive loop of innovation, as these smart machines themselves help us make smarter machines, will accelerate the revolution’s pace. Three crucial consequences follow:

1. This revolution will create phenomenal wealth. The price of many kinds of labor (which drives the costs of goods and services) will fall toward zero once sufficiently powerful AI “joins the workforce.”

2. The world will change so rapidly and drastically that an equally drastic change in policy will be needed to distribute this wealth and enable more people to pursue the life they want.

3. If we get both of these right, we can improve the standard of living for people more than we ever have before.

Because we are at the beginning of this tectonic shift, we have a rare opportunity to pivot toward the future. That pivot can’t simply address current social and political problems; it must be designed for the radically different society of the near future.

Policy plans that don’t account for this imminent transformation will fail for the same reason that the organizing principles of pre-agrarian or feudal societies would fail today.

What follows is a description of what’s coming and a plan for how to navigate this new landscape. . .

Part 1
The AI Revolution
On a zoomed-out time scale, technological progress follows an exponential curve. Compare how the world looked 15 years ago (no smartphones, really), 150 years ago (no combustion engine, no home electricity), 1,500 years ago (no industrial machines), and 15,000 years ago (no agriculture).
The coming change will center around the most impressive of our capabilities: the phenomenal ability to think, create, understand, and reason. To the three great technological revolutions–the agricultural, the industrial, and the computational–we will add a fourth: the AI revolution. This revolution will generate enough wealth for everyone to have what they need, if we as a society manage it responsibly.
The technological progress we make in the next 100 years will be far larger than all we’ve made since we first controlled fire and invented the wheel. We have already built AI systems that can learn and do useful things. They are still primitive, but the trendlines are clear.

Part 2
Moore's Law for Everything
Part 3
Capitalism for Everyone
Part 4
Implementation and Troubleshooting
Part 5
Shifting to the New System >>>more

2017
Back to top


October 2017

"Man will only become better when you make him see what he is like." —Anton Chekhov

The Simplistic Debate Over Artificial Intelligence
AI might bring doom, but so might we...if we remain independent.
By Preston Estep Ph.D., Psychology Today
25 October 2017

Excerpt
The disagreement this summer between two currently reigning U.S. tech titans has brought new visibility to the debate about possible risks of Artificial Intelligence. Tesla and SpaceX CEO Elon Musk has been warning since 2014 about the doomsday potential of runaway AI. Along the way, Musk’s views have been challenged by many, but the debate went mainstream when another billionaire celebrity CEO, Facebook’s Mark Zuckerberg, took issue with what he believes to be Musk’s alarmist view.

This tiff of the titans is worth noting because of the high stakes and because their opposing views represent the most common schism over AI. Is it a controllable force for good, or a potentially uncontrollable force that, unless managed properly—or even shut down—places humanity at an unreasonably high risk of doom?
Of course, this debate isn’t new. ...

2015

December 2015

Artificial Intelligence Nonprofit OpenAI Launches With Backing From Elon Musk And Sam Altman
Drew Olanoff, TechCrunch
12 December, 2015

Excerpt:
Today, OpenAI, a nonprofit artificial intelligence research company was announced to the world. Its director, Ilya Sutskever, is a research scientist at Google. This comes a day after Facebook open-sourced its AI hardware.

Its reason for existing was explained in an introductory post:
Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.
Former Stripe CTO Greg Brockman is taking the same position for OpenAI. There are quite a few other interesting names involved, including Y Combinator’s Sam Altman and Tesla/SpaceX’s Elon Musk acting as co-chairs:

The group’s other founding members are world-class research engineers and scientists: Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba. Pieter Abbeel, Yoshua Bengio, Alan Kay, Sergey Levine, and Vishal Sikka are advisors to the group. OpenAI’s co-chairs are Sam Altman and Elon Musk.

The organization is being funded by Altman, Brockman, Musk, Jessica Livingston, Peter Thiel, Amazon Web Services, Infosys and YC Research. Those funders have contributed $1 billion thus far. Musk has been donating money to make sure that AI doesn’t go the way of Skynet, so it’s nice to know that his involvement will have a safety lens on it. >>>more

December 2015

Introducing OpenAI
By Greg Brockman and Ilya Sutskever
11 December, 2015, OpenAI Blog

Introduction
OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.

We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible. The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field. >>>more

December 2015

Facebook makes the hardware it uses for AI open-source
11 December, 2015
'Big Sur' is an open-rack-compatible GPU-accelerated hardware platform for artificial intelligence

By Sean Buckley, Associate Editor, engadget

You might not think of it often, but behind the scenes Facebook uses a lot of artificial intelligence. The company leans heavily on AI, using machine learning to curate a better news feed, sort through photo and video content and even read stories or play games. Now, the company is making Big Sur, the hardware it runs its AI experiments on, open-source.

Facebook says it will release its AI hardware design to the Open Compute Project soon, promising to give the community a system designed specifically for AI tasks built from off-the-shelf components. >>>more

Go to: Benefits of Social Media

Back to top


Top of Page