Collated by Maireid Sullivan
2015 - updated 2025
Work in progress
Minimalism: take a pattern and make it multiply and mutate as it goes, like a seed. Human creativity reaps a harvest, but we deserve to know when our 'work' is plagiarised by artificially generated intelligence: "AI"
"We are allowing the technologists to frame this as a problem that they’re equipped to solve. That’s a lie. People talk about AI as if it will know truth. AI’s not gonna solve these problems. AI cannot solve the problem of fake news. Google doesn’t have the option of saying, “Oh, is this conspiracy? Is this truth?” Because they don’t know what truth is. They don’t have a proxy for truth that’s better than a click."
- Cathy O’Neil, American mathematician, data scientist, and author of the New York Times best-seller Weapons of Math Destruction, 2017, Penguin
"AI refers to the ability of a computer or machine to mimic the competencies of the human mind, which often learns from previous experiences to understand and respond to language, decisions, and problems." - B. Thormundsson
How psychologists kick-started AI by studying the human mind
The Conversation, February 3, 2025 by Chris Ludlow, Lecturer in Psychology, Swinburne University of Technology
and Armita Zarnegar, Lecturer in Computer Science, Swinburne University of Technology
Excerpt
As the science of the mind, psychology has played a pivotal role in shaping artificial intelligence, offering insights into human cognition, learning and behaviour that have profoundly influenced AI’s development.
These contributions not only laid the foundations for AI but also continue to guide its future development. The study of psychology has shaped our understanding of what constitutes intelligence in machines, and how we can address the complex challenges and benefits associated with this technology.
Machines mimicking nature
The origins of modern AI can be traced back to psychology in the mid-20th century. In 1949, psychologist Donald Hebb proposed a model for how the brain learns: connections between brain cells grow stronger when they are active at the same time.
This idea gave a hint of how machines might learn by mimicking nature’s approach.
...
In the 1950s, psychologist Frank Rosenblatt built on Hebb’s theory to develop a system called the perceptron.
The perceptron was the first artificial neural network ever made. . . .
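Hebb's learning rule and Rosenblatt's perceptron can be illustrated in a few lines of code. The following is a minimal sketch, not historical source code: a single perceptron trained with the classic error-correction rule on an invented AND-gate dataset; the learning rate and epoch count are arbitrary choices for the example.

```python
# A minimal Rosenblatt-style perceptron learning the AND function.
# Training data, learning rate, and epoch count are illustrative.

def train_perceptron(samples, epochs=20, lr=0.1):
    """Learn two weights and a bias with the perceptron update rule."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            predicted = 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0
            error = target - predicted
            # Hebb-flavoured update: strengthen connections whose
            # inputs were active when the output was wrong.
            w[0] += lr * error * x1
            w[1] += lr * error * x2
            b += lr * error
    return w, b

def predict(w, b, x1, x2):
    return 1 if (w[0] * x1 + w[1] * x2 + b) > 0 else 0

samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(samples)
print([predict(w, b, x1, x2) for (x1, x2), _ in samples])  # -> [0, 0, 0, 1]
```

The same update rule, stacked into many layers and driven by backpropagation, is the lineage that leads to today's deep networks.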
Excerpt
Today we’re launching deep research in ChatGPT, a new agentic capability that conducts multi-step research on the internet for complex tasks.
It accomplishes in tens of minutes what would take a human many hours.
Deep research is OpenAI's next agent that can do work for you independently - you give it a prompt, and ChatGPT will find, analyze, and synthesize hundreds of online sources to create a comprehensive report at the level of a research analyst. Powered by a version of the upcoming OpenAI o3 model that’s optimized for web browsing and data analysis, it leverages reasoning to search, interpret, and analyze massive amounts of text, images, and PDFs on the internet, pivoting as needed in reaction to information it encounters.
The ability to synthesize knowledge is a prerequisite for creating new knowledge. For this reason, deep research marks a significant step toward our broader goal of developing AGI, which we have long envisioned as capable of producing novel scientific research. Why we built deep research...
Excerpt
Releasing the DeepSeek AI V3 model as open source is a blessing. The strategy DeepSeek engineers devised to develop such an efficient AI model is gradually coming to light. Before continuing, it's essential to remember that DeepSeek claims to have trained its model using only 2,048 Nvidia H800 chips.
Some analysts say its infrastructure consists of 50,000 H100 GPUs purchased through intermediaries, although this remains conjecture. The H100 is more powerful than the H800, but it's entirely plausible that DeepSeek had to settle for the second due to US government sanctions preventing Chinese companies from accessing the H100. As of November 2023, Nvidia is also barred from shipping its H800 chip to Chinese customers.
One of the Keys to DeepSeek's Success: PTX
Nvidia's GPUs aren't the only factor behind the company's rapid growth over the past five years. Its compute unified device architecture (CUDA) has played a crucial role. Most AI projects today rely on CUDA, which unifies the compilers and development tools programmers use to write software for Nvidia GPUs. Replacing it in ongoing projects presents challenges.
. . .
DeepSeek's programmers have achieved an engineering feat likely to influence how AI model developers approach their projects. It's tangible proof that China has successfully adapted to the GPU shortage caused by US sanctions. ...
DeepSeek "mission" - "open source, cheap, superior performance"
- Liang Wenfeng (b. 1985), founder and CEO of DeepSeek AI.
"When he’s not revolutionizing AI, Liang explores deep-sea caves. 'Sometimes, the answers are deep below the surface,' he says.
...And Liang? He turned down a $10B acquisition offer. 'DeepSeek isn’t for sale. It’s a mission,' he said."
- Tomy Chang, China Focus, Jan. 29, 2025
Excerpt
5 years ago, he walked away from Wall Street to chase a dream.
Before DeepSeek, Liang was the genius behind High-Flyer (Huanfang) Quantitative, an AI-powered hedge fund that crushed Wall Street.
His algorithms predicted market trends with scary accuracy.
But Liang wasn’t satisfied.
He saw a bigger problem to solve: AI for everyone, not just the elite.
. . .
In 2021, he made a bold move: buying 10,000 Nvidia H800 chips.
He brought his top hedge fund employees on board—experts at squeezing maximum power from GPUs.
Then in 2023, DeepSeek was born.
A tiny team, relentless execution, and no shortcuts.
To win the AI race, Liang hired dozens of PhDs from China’s top universities: Peking, Tsinghua, Beihang. . . .
Liang isn’t afraid to share.
The company’s transparency fuels innovation, but critics question how they’ve achieved such rapid efficiency.
DeepSeek published their groundbreaking methods in a paper co-authored by 200+ researchers: "DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning" (submitted to arXiv, 22 January 2025)
Excerpt
DeepSeek's AI Assistant, powered by DeepSeek-V3, has overtaken rival ChatGPT to become the top-rated free application available on Apple's App Store in the United States.
. . .
WHO IS BEHIND DEEPSEEK?
DeepSeek is a Hangzhou-based startup whose controlling shareholder is Liang Wenfeng, co-founder of quantitative hedge fund High-Flyer, based on Chinese corporate records.
Liang's fund announced in March 2023 on its official WeChat account that it was "starting again", going beyond trading to concentrate resources on creating a "new and independent research group, to explore the essence of AGI" (Artificial General Intelligence). DeepSeek was created later that year.
. . .
HOW DOES BEIJING VIEW DEEPSEEK?
DeepSeek's success has already been noticed in China's top political circles. On January 20, the day DeepSeek-R1 was released to the public, founder Liang attended a closed-door symposium for businessmen and experts hosted by Chinese premier Li Qiang, according to state news agency Xinhua.
Liang's presence at the gathering is potentially a sign that DeepSeek's success could be important to Beijing's policy goal of overcoming Washington's export controls and achieving self-sufficiency in strategic industries like AI. ...
Excerpt
The models, which are available for download from the AI dev platform Hugging Face, are part of a new model family that DeepSeek is calling Janus-Pro. They range in size from 1 billion to 7 billion parameters. Parameters roughly correspond to a model’s problem-solving skills, and models with more parameters generally perform better than those with fewer parameters.
Janus-Pro is under an MIT license, meaning it can be used commercially without restriction. . . .
This will be a defining year for AI. In 2025, I expect Meta AI will be the leading assistant serving more than 1 billion people, Llama 4 will become the leading state of the art model, and we'll build an AI engineer that will start contributing increasing amounts of code to our R&D efforts. To power this, Meta is building a 2GW+ datacenter that is so large it would cover a significant part of Manhattan. We'll bring online ~1GW of compute in '25 and we'll end the year with more than 1.3 million GPUs. We're planning to invest $60-65B in capex this year while also growing our AI teams significantly, and we have the capital to continue investing in the years ahead. This is a massive effort, and over the coming years it will drive our core products and business, unlock historic innovation, and extend American technology leadership. Let's go build!
"DeepSeek has about 50,000 NVIDIA H100s that they can't talk about because of the US export controls that are in place." - Alexandr Wang
Scale AI Founder/CEO joins 'Squawk Box' to discuss:
- the AI landscape in 2025,
- the state of the AI arms race between the U.S. and China,
- the impact of the U.S. chip export controls,
- the future of AI development,
- his thoughts on the $500 billion Stargate project,
- AI competition in the U.S., and
- DEI vs. 'MEI' in corporate America, and more.
Abstract
We introduce our first-generation reasoning models, DeepSeek-R1-Zero and DeepSeek-R1. DeepSeek-R1-Zero, a model trained via large-scale reinforcement learning (RL) without supervised fine-tuning (SFT) as a preliminary step, demonstrates remarkable reasoning capabilities. Through RL, DeepSeek-R1-Zero naturally emerges with numerous powerful and intriguing reasoning behaviors. However, it encounters challenges such as poor readability, and language mixing. To address these issues and further enhance reasoning performance, we introduce DeepSeek-R1, which incorporates multi-stage training and cold-start data before RL. DeepSeek-R1 achieves performance comparable to OpenAI-o1-1217 on reasoning tasks. To support the research community, we open-source DeepSeek-R1-Zero, DeepSeek-R1, and six dense models (1.5B, 7B, 8B, 14B, 32B, 70B) distilled from DeepSeek-R1 based on Qwen and Llama.
Full report:
BEIJING, January 13 (Xinhua)
China's Ministry of Commerce on Monday said that China firmly opposes the Biden administration's announcement of restrictions on exports related to artificial intelligence (AI).
The restrictions tighten export controls on AI chips and model parameters further, while extending extraterritorial jurisdiction. They create obstacles and interference for third parties engaged in normal trade with China, a ministry spokesperson said.
Previously, high-tech companies and industry organizations from the United States expressed their dissatisfaction and concern through various channels, arguing that the restrictive measures had been formulated hastily without sufficient discussion and constitute the excessive regulation of the AI sector. Believing that these measures will lead to significant adverse consequences, they have strongly urged the Biden administration to halt their implementation, the spokesperson said.
However, the Biden administration has disregarded industry appeals and insisted on the hasty implementation of these measures. This action exemplifies the generalization of the concept of national security and the misuse of export controls, marking a blatant violation of international multilateral trade rules, according to the spokesperson.
This action has severely hindered normal trade between countries, undermined market rules and international economic order, and affected global technological innovation. It has also damaged the interests of businesses worldwide, including those in the United States, the spokesperson said, adding that China will take necessary measures to firmly safeguard its legitimate rights and interests.
Excerpt:
Artificial intelligence is poised to drive innovation, enhance human productivity and inject trillions of dollars into the global economy. At the forefront of this transformation are open source AI models, like Llama, which empower organizations to use and build upon them for free.
These models allow businesses of all sizes to create innovative new products and tools that benefit individuals, society and the economy – saving time and money in the process. Without open source AI’s cost-effective way of working, these innovations, which are poised to push the world forward in essential areas like job creation, knowledge access and drug research, might not be possible.
Here are a few examples of companies that are using Llama to save time and money. . . >>>more
Excerpt:
Professor Kate Crawford is a world-leading scholar in artificial intelligence, who has built a career studying the social and political implications of AI – while also breaking new ground in creative approaches to the subject.
She is a veteran in the field
While AI might seem like a recent innovation, Crawford has been studying big data, algorithms and machine learning for over 20 years.
Her work focuses on ‘opening the black box of AI’, not only to expose the biases, assumptions, errors, and ideological positions within AI technologies, but also the complex chains of labour and production which underpin them:
‘This means looking at the natural resources that drive it, the energy that it consumes, the hidden labour all along the supply chain, and the vast amounts of data that are extracted from every platform and device that we use every day,’ she has said.
Her art has been shown at MoMA in New York, London’s V&A and Fondazione Prada in Milan
As well as her scholarly and journalistic outputs, Crawford uses artistic data visualisation to communicate the invisible systems and processes that power AI.
Her artistic outputs include Anatomy of an AI System, created with fellow artist and researcher Vladan Joler, which explores the life cycle of an Amazon Echo smart speaker; Training Humans, in collaboration with artist and photographer Trevor Paglen, which examines how images are used to train AI systems; and Calculating Empires: A Genealogy of Technology and Power, 1500-2025, which maps over 500 years of how communication and computation are intertwined with systems of power and control – from the Gutenberg printing press right through to large language models.
Crawford has explained that she uses art as part of her research output because discussions about AI are too important to keep within a rarefied academic context: ‘Given the enormous social and political impact, this has to be a set of issues and questions that are as public as possible,’ she has said.
She was a founding member of the feminist collective Deep Lab. Formed in 2014, this cyber-feminist collective comprised a diverse group of researchers, artists, writers, engineers, and cultural producers. Their work, which spanned lectures, publications, contemporary art, public programming and performances, aimed to combat discrimination towards marginalised people at the hands of ‘corporate dominance, data mining, government surveillance, and a male-dominated tech field.’ ... >>> more
Excerpts:
The 2024 Nobel Prize in Economics has been awarded to three US-based economists who examined the advantages of democracy and the rule of law, and why they are strong in some countries and not others.
Daron Acemoglu is a Turkish-American economist at the Massachusetts Institute of Technology, Simon Johnson is a British economist at the Massachusetts Institute of Technology and James Robinson is a British-American economist at the University of Chicago.
The citation awards the prize “for studies of how institutions are formed and affect prosperity”, making it an award for research into politics and sociology as much as economics.
... Last year Acemoglu and Johnson published Power and Progress:
Our Thousand-Year Struggle Over Technology and Prosperity.
... In May this year Acemoglu wrote about artificial intelligence, putting forward the controversial position that its effects on productivity would be “nontrivial but modest”, which is another way of saying “tiny”. Its effect on wellbeing might be even smaller and it was unlikely to reduce inequality. . .
September 2024
No, Sam Altman, AI Won’t Solve All of Humanity’s Problems
By Steven Levy, Sep 27, 2024, WIRED
The OpenAI CEO’s recent mini-manifesto argues (again) that AI will make the future impossibly bright. He could use a refresher course on the basics of human behaviour. . .
Excerpt
... in 1956, in a quiet corner of New Hampshire ...
The Dartmouth Summer Research Project on Artificial Intelligence, often remembered as the Dartmouth Conference, kicked off on June 18 and lasted for about eight weeks. It was the brainchild of four American computer scientists – John McCarthy, Marvin Minsky, Nathaniel Rochester and Claude Shannon – and brought together some of the brightest minds in computer science, mathematics and cognitive psychology at the time.
These scientists, along with some of the 47 people they invited, set out to tackle an ambitious goal: to make intelligent machines.
As McCarthy put it in the conference proposal, they aimed to find out “how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans”.
The birth of a field – and a problematic name
The Dartmouth Conference didn’t just coin the term “artificial intelligence”;
it coalesced an entire field of study. It’s like a mythical Big Bang of AI – everything we know about machine learning, neural networks and deep learning now traces its origins back to that summer in New Hampshire.
But the legacy of that summer is complicated.
Artificial intelligence won out as a name over others proposed or in use at the time. Shannon preferred the term “automata studies”, while two other conference participants (and the soon-to-be creators of the first AI program), Allen Newell and Herbert Simon, continued to use “complex information processing” for a few years still.
But here’s the thing: having settled on AI, no matter how much we try, today we can’t seem to get away from comparing AI to human intelligence.
This comparison is both a blessing and a curse. . . .
Abstract
At the core of what defines us as humans is the concept of theory of mind: the ability to track other people’s mental states. The recent development of large language models (LLMs) such as ChatGPT has led to intense debate about the possibility that these models exhibit behaviour that is indistinguishable from human behaviour in theory of mind tasks. Here we compare human and LLM performance on a comprehensive battery of measurements that aim to measure different theory of mind abilities, from understanding false beliefs to interpreting indirect requests and recognizing irony and faux pas. We tested two families of LLMs (GPT and LLaMA2) repeatedly against these measures and compared their performance with those from a sample of 1,907 human participants. Across the battery of theory of mind tests, we found that GPT-4 models performed at, or even sometimes above, human levels at identifying indirect requests, false beliefs and misdirection, but struggled with detecting faux pas. Faux pas, however, was the only test where LLaMA2 outperformed humans. Follow-up manipulations of the belief likelihood revealed that the superiority of LLaMA2 was illusory, possibly reflecting a bias towards attributing ignorance. By contrast, the poor performance of GPT originated from a hyperconservative approach towards committing to conclusions rather than from a genuine failure of inference. These findings not only demonstrate that LLMs exhibit behaviour that is consistent with the outputs of mentalistic inference in humans but also highlight the importance of systematic testing to ensure a non-superficial comparison between human and artificial intelligences.
Excerpt
. . . Today we’re taking the next steps towards open source AI becoming the industry standard. We’re releasing Llama 3.1 405B, the first frontier-level open source AI model, as well as new and improved Llama 3.1 70B and 8B models. In addition to having significantly better cost/performance relative to closed models, the fact that the 405B model is open will make it the best choice for fine-tuning and distilling smaller models.
Beyond releasing these models, we’re working with a range of companies to grow the broader ecosystem. Amazon, Databricks, and NVIDIA are launching full suites of services to support developers fine-tuning and distilling their own models.
. . .
Why Open Source AI Is Good for Developers
. . .
Why Open Source AI Is Good for Meta
. . .
Why Open Source AI Is Good for the World
Excerpt:
In the early 1950s, small, peculiar love letters were pinned up on the walls of the computing lab at the University of Manchester.
Two of them published by Christopher Strachey read:
Darling Sweetheart
You are my avid fellow feeling. My affection curiously clings to your passionate wish. My liking yearns for your heart. You are my wistful sympathy: my tender liking.
Yours beautifully
M U C
Honey Dear
My sympathetic affection beautifully attracts your affectionate enthusiasm. You are my loving adoration: my breathless adoration. My fellow feeling breathlessly hopes for your dear eagerness. My lovesick adoration cherishes your avid ardour.
Yours wistfully
M U C
These are strange love letters, for sure. And the history behind them is even stranger; examples of the world’s first computer-generated writing, they’re signed by MUC, the acronym for the Manchester University Computer. In 1952, decades before ChatGPT started to write students’ essays, before OpenAI’s computer generated writing was integrated into mainstream media outlets, two gay men—Alan Turing and Christopher Strachey—essentially invented AI writing. Alongside Turing, Strachey worked on several experiments with Artificial Intelligence: a computer that could sing songs, one of the world’s first computer games, and an algorithm to write gender-neutral mash notes that screamed with longing.
.
. .
On May 15, 1951, Turing delivered a short radio broadcast titled “Can Digital Computers Think?” for the BBC Home Service. It was a question both he and Strachey were exploring. In his lecture, Turing asks listeners to imagine the computer as a mechanical brain, similar to but not exactly like a human brain. A computer can learn, it can be trained, and with time, Turing said, it can exhibit its own unique form of intelligence. He noted one particular difficulty: the computer can do only what the human programmer stipulates. It lacks free will.
“To behave like a brain seems to involve free will,” Turing continues, “but the behavior of a digital computer, when it has been programmed, is completely determined.”
To solve this problem, he suggests a trick. The computer could use a roulette wheel feature to select variables randomly. Then, the computer would appear to make something original and new by adding in a touch of randomness. . . .
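Strachey's generator worked exactly this way: fixed sentence templates filled with words drawn at random from hand-built lists. A toy re-creation follows; the word lists and templates here are invented stand-ins, not Strachey's actual vocabulary, and the seeded random source plays the part of Turing's "roulette wheel".

```python
import random

# Toy re-creation of the 1952 MUC love-letter generator: templates
# plus random word choice. Vocabulary below is illustrative only.
ADJECTIVES = ["avid", "wistful", "tender", "breathless", "lovesick"]
NOUNS = ["affection", "adoration", "sympathy", "fellow feeling", "ardour"]
SALUTATIONS = ["Darling Sweetheart", "Honey Dear"]

def love_letter(rng):
    """Assemble a letter from a salutation, three body sentences, and a sign-off."""
    lines = [rng.choice(SALUTATIONS)]
    for _ in range(3):
        lines.append(
            f"My {rng.choice(ADJECTIVES)} {rng.choice(NOUNS)} "
            f"cherishes your {rng.choice(ADJECTIVES)} {rng.choice(NOUNS)}."
        )
    lines.append(f"Yours {rng.choice(ADJECTIVES)}ly")
    lines.append("M U C")
    return "\n".join(lines)

# A seeded "roulette wheel" makes the output reproducible.
print(love_letter(random.Random(0)))
```

Each run with a different seed yields a different letter, which is precisely the "touch of randomness" Turing proposed to make a determined machine appear original.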
June 2024
According to Statista, the global Artificial Intelligence market will reach 1.8 trillion USD by 2030 - across logistics, healthcare, manufacturing, banking, and retail.
Artificial intelligence evokes images of supercomputer assistants, machines that can think creatively, and, to some, scenes from their favorite sci-fi movie. The reality, despite not being as futuristic, is not far off. AI refers to the ability of a computer or machine to mimic the competencies of the human mind, which often learns from previous experiences to understand and respond to language, decisions, and problems. The market for AI technologies is vast, amounting to around 200 billion U.S. dollars in 2023, and is expected to grow well beyond that to over 1.8 trillion U.S. dollars by 2030.
Deep learning is aptly named: it is the subset of machine learning that continues to learn as it operates, and machine learning is in turn a subset of artificial intelligence (AI). Machine learning is a simpler tool, wherein the programmer gives the AI a set of parameters that it follows. Increased utilization of deep learning has the potential to greatly reduce the manual work of programming parameters for AI. Across several industries, greater use of deep learning algorithms is likely to enable more efficient expenditure of programmers’ time and energy. Deep learning is most often found in virtual assistants, voice-enabled remotes, and emerging technologies such as self-driving cars. Its application requires substantial processing power, using GPUs with a high-performance capacity to handle the enormous number of calculations needed.
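The distinction between hand-coded rules and learned parameters can be made concrete. The sketch below is illustrative only: a rule whose parameter the programmer supplies directly, next to a parameter "learned" from invented labelled data (here, a simple midpoint between class means).

```python
# Hand-coded rule vs. a learned parameter. All data is invented.

def rule_based_spam(msg):
    # Classic approach: the programmer supplies the rule directly.
    return "free money" in msg.lower()

def learn_threshold(lengths_spam, lengths_ham):
    """Learn one parameter from labelled examples: the midpoint
    between the mean message lengths of the two classes."""
    mean_spam = sum(lengths_spam) / len(lengths_spam)
    mean_ham = sum(lengths_ham) / len(lengths_ham)
    return (mean_spam + mean_ham) / 2

threshold = learn_threshold([120, 150, 130], [20, 35, 25])
print(threshold)  # midpoint between the two class means
```

Deep learning pushes this further: instead of one learned threshold, millions (or billions) of parameters are adjusted automatically, which is why it demands the GPU horsepower described above.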
Excerpt
"Theory of mind—the ability to understand other people’s mental states—is what makes the social world of humans go around. It’s what helps you decide what to say in a tense situation, guess what drivers in other cars are about to do, and empathize with a character in a movie. And according to a new study, the large language models (LLM) that power ChatGPT and the like are surprisingly good at mimicking this quintessentially human trait.
“Before running the study, we were all convinced that large language models would not pass these tests, especially tests that evaluate subtle abilities to evaluate mental states,” says study coauthor Cristina Becchio, a professor of cognitive neuroscience at the University Medical Center Hamburg-Eppendorf in Germany. The results, which she calls “unexpected and surprising,” were published today—somewhat ironically, in the journal Nature Human Behaviour."
Nvidia has joined the list of companies facing lawsuits over claims that copyrighted material is being used to train their AI models.
Nvidia is facing legal trouble from a trio of authors, who claim the company used copyrighted books to train one of its AI models.
The US dispute – first reported by Reuters – involves authors Brian Keene, Abdi Nazemian and Stewart O’Nan, who claim their works were included in a dataset used to train NeMo, an Nvidia framework designed to build and customise generative AI models.
This dataset was taken down last October for reported copyright infringement. The three authors are seeking damages for copyrighted works that helped train NeMo’s large language models in the last three years, Reuters reports. The copyright dispute was filed with the US District Court for the Northern District of California.
AI v copyright
The case is the latest in the growing issue of copyright infringement for AI companies. Last year saw a number of authors file suits against both OpenAI and Meta, with claims that their AI models used their books as training material.
Those lawsuits claimed the large language models developed by Meta and OpenAI were trained on illegal “shadow libraries” – websites that contain pirated versions of the authors’ books.
Towards the end of 2023, The New York Times stepped into the ring with a high-profile lawsuit against both OpenAI and Microsoft. The media outlet claimed AI models such as ChatGPT copied and used millions of copyrighted news articles, in-depth investigations and other journalistic work.
In January, OpenAI said it was “surprised and disappointed” by the lawsuit and added the newspaper was “not telling the full story”. It followed up in February with a claim that The New York Times “paid someone to hack OpenAI’s products” to generate “highly anomalous results” used as evidence in its AI copyright case.
In a statement sent to SiliconRepublic.com, Ian Crosby, a partner at Susman Godfrey and lead counsel for The New York Times, noted that OpenAI did not dispute that it copied millions of articles from the media outlet to build its products.
The "Godfather of AI": ‘Will Digital Intelligence Replace Biological Intelligence?’, University of Oxford, YouTube (36:53), 19 February 2024
Professor Geoffrey Hinton, CC, FRS, FRSC, received his PhD in artificial intelligence from Edinburgh in 1978.
Synopsis
Digital computers were designed to allow a person to tell them exactly what to do. They require high energy and precise fabrication, but in return they allow exactly the same model to be run on physically different pieces of hardware, which makes the model immortal.
For computers that learn what to do, we could abandon the fundamental principle that the software should be separable from the hardware and mimic biology by using very low power analog computation that makes use of the idiosyncratic properties of a particular piece of hardware. This requires a learning algorithm that can make use of the analog properties without having a good model of those properties. Using the idiosyncratic analog properties of the hardware makes the computation mortal. When the hardware dies, so does the learned knowledge. The knowledge can be transferred to a younger analog computer by getting the younger computer to mimic the outputs of the older one, but education is a slow and painful process.
By contrast, digital computation makes it possible to run many copies of exactly the same model on different pieces of hardware. Thousands of identical digital agents can look at thousands of different datasets and share what they have learned very efficiently by averaging their weight changes. That is why chatbots like GPT4 and Gemini can learn thousands of times more than any one person. Also, digital computation can use the backpropagation learning procedure which scales much better than any procedure yet found for analog hardware. This leads me to believe that large scale digital computation is probably far better at acquiring knowledge than biological computation and may soon be much more intelligent than us.
The fact that digital intelligences are immortal and did not evolve should make them less susceptible to religion and wars, but if a digital super-intelligence ever wanted to take control it is unlikely that we could stop it, so the most urgent research question in AI is how to ensure that they never want to take control.
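The weight-sharing mechanism Hinton describes can be sketched in miniature. This is an illustrative toy, not his algorithm: identical copies of a "model" (here a single weight) each take a gradient step on their own data shard, then pool what they learned by averaging; the shard values, learning rate, and step count are invented.

```python
# Toy sketch of knowledge-sharing by weight averaging across
# identical model copies. The "model" is one weight; data is invented.

def local_update(weight, shard, lr=0.01):
    """One gradient step minimising squared error to the shard's targets."""
    grad = sum(2 * (weight - t) for t in shard) / len(shard)
    return weight - lr * grad

def shared_training(shards, steps=500):
    weight = 0.0
    for _ in range(steps):
        # Each copy starts from the shared weight, trains on its own shard...
        updated = [local_update(weight, shard) for shard in shards]
        # ...then all copies average their weight changes into one model.
        weight = sum(updated) / len(updated)
    return weight

shards = [[1.0, 2.0], [3.0], [4.0, 5.0]]
print(round(shared_training(shards), 2))  # converges near 3.0, the consensus value
```

No single copy ever saw all the data, yet the shared weight reflects every shard; scaled up to billions of weights and thousands of agents, that is the efficiency Hinton contrasts with slow human education.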
The speaker
Geoffrey Hinton was one of the researchers who introduced the backpropagation algorithm and the first to use backpropagation for learning word embeddings. His other contributions to neural network research include Boltzmann machines, distributed representations, time-delay neural nets, mixtures of experts, variational learning and deep learning. His research group in Toronto made major breakthroughs in deep learning that revolutionized speech recognition and object classification.
Geoffrey Hinton is a fellow of the UK Royal Society and a foreign member of the US National Academy of Engineering, the US National Academy of Science and the American Academy of Arts and Sciences. His awards include the David E. Rumelhart prize, the IJCAI award for research excellence, the Killam prize for Engineering, the IEEE Frank Rosenblatt medal, the NSERC Herzberg Gold Medal, the IEEE James Clerk Maxwell Gold medal, the NEC C&C award, the BBVA award, the Honda Prize, the Princess of Asturias Award and the Turing Award.
Excerpt
Sam Altman (born April 22, 1985, Chicago, Illinois, U.S.) ... has been compared to tech visionaries, including Steve Jobs and Bill Gates, and is known for his belief that artificial general intelligence (AGI) will be able to do anything that humans can.
. . . OpenAI: ChatGPT and turmoil
In 2015 OpenAI was founded as a nonprofit organization to develop AI for the benefit of humanity. Altman and Tesla CEO Elon Musk were cochairs of the organization. OpenAI started with $1 billion in funding provided by Altman, Musk, American entrepreneur Peter Thiel, and the cloud computing company Amazon Web Services, among others.
At the heart of the founding of OpenAI was the recognition of the power of artificial intelligence and the question of how that power would be used.
In 2019 Altman compared the work of OpenAI to the Manhattan Project, which developed the first atomic bomb, telling The New York Times that the Manhattan Project had been “on the scale of OpenAI— the level of ambition we aspire to.” He is proud to point out that he and J. Robert Oppenheimer share a birthday.
In 2018 Musk told Altman that he himself should run OpenAI so that it could catch up to Google. Altman turned Musk down, and Musk left OpenAI, which put the organization in a difficult position because Musk had been funding its work. Because AI development requires enormous computing resources, in 2019 OpenAI created a for-profit company that would fund OpenAI’s work but would be controlled by the nonprofit board.
The for-profit part of OpenAI then partnered with the software company Microsoft to use its cloud computing service Azure, while Microsoft integrated OpenAI software into its products. Microsoft controlled 49 percent of OpenAI. . . >>>more
Introducing Sam Altman, Time Magazine 2023 CEO of the Year
“the year that they started taking AI seriously”
Excerpts:
We’re speaking exactly one year after OpenAI released ChatGPT, the most rapidly adopted tech product ever. The impact of the chatbot and its successor, GPT-4, was transformative—for the company and the world. “For many people,” Altman says, 2023 was “the year that they started taking AI seriously.” Born as a nonprofit research lab dedicated to building artificial intelligence for the benefit of humanity, OpenAI became an $80 billion rocket ship. Altman emerged as one of the most powerful and venerated executives in the world, the public face and leading prophet of a technological revolution.
. . . “I think that’s the responsibility of capitalism,” Altman says. “You take big swings at things that are important to get done.”
Altman’s pursuit of fusion hints at the staggering scope of his ambition. He’s put $180 million into Retro Biosciences, a longevity startup hoping to add 10 healthy years to the human life-span. He conceived of and helped found Worldcoin, a biometric-identification system with a crypto-currency attached, which has raised hundreds of millions of dollars. Through OpenAI, Altman has spent $10 million seeding the longest-running study into universal basic income (UBI) anywhere in the U.S., which has distributed more than $40 million to 3,000 participants, and is set to deliver its first set of findings in 2024. Altman’s interest in UBI speaks to the economic dislocation that he expects AI to bring—though he says it’s not a “sufficient solution to the problem in any way.”
. . . Altman published a 10-point policy platform, which he dubbed the United Slate, with goals that included lowering housing costs, Medicare for All, tax reform, and ambitious clean-energy targets. He ultimately passed on a career switch. “It was so clear to me that I was much better suited to work on AI,” Altman says, “and that if we were able to succeed, it would be a much more interesting and impactful thing for me to do."
. . . Altman’s beliefs are shaped by the theories of late 19th century political economist Henry George, who combined a belief in the power of market incentives to deliver increasing prosperity with a disdain for those who speculate on scarce assets, like land, instead of investing their capital in human progress. Altman has advocated for a land-value tax—a classic Georgist policy—in recent meetings with world leaders, he says.
Asked on a walk through OpenAI’s headquarters whether he has a vision of the future to help make sense of his various investments and interests, Altman says simply, “Abundance. That’s it.” >>>more
November 2023
What Sam Altman’s Firing Means for the Future of OpenAI
By Steven Levy, Nov. 18, 2023, WIRED
Sam Altman made OpenAI into a powerhouse by adding a profit-seeking arm to its utopian mission. After the board rejected his vision, the company’s remaining leaders must figure out a new path forward.
Excerpt:…
First, it’s important to remember that OpenAI was founded by Altman and Elon Musk to fulfill a mission. “The organization is trying to develop a human-positive AI. And because it’s a nonprofit, it will be owned by the world,” Altman told me in December 2015, just before the project was revealed to the world.
Though it seemed clear that Altman was the primary instigator, he was not yet OpenAI’s leader. But the company was squarely within his bailiwick: OpenAI was to be part of the research wing of the startup incubator Y Combinator, where Altman was CEO. When he became YC’s top executive, Altman had started the division to chase the dream of using tech to solve the world’s knottiest problems. The original plan for OpenAI was to gather a relatively small number of the world’s best AI scientists and discover the keys to artificial general intelligence able to outperform humans on every dimension, inside a structure that gave ownership of this unimaginably powerful technology to the people, not giant corporations. >>>more
Excerpt:
OpenAI is now facing yet another copyright infringement lawsuit over the protected media that it allegedly used to train ChatGPT. Unlike similar complaints, however, the newly filed class-action also names Microsoft as a defendant.
Julian Sancton, the author of 2021’s Madhouse at the End of the Earth, only recently submitted the suit to a New York federal court. As highlighted, the case is one of several levied against OpenAI (which reinstated Sam Altman as CEO today) owing to the alleged unauthorized use of copyrighted works.
. . . “OpenAI and Microsoft have built a business valued into the tens of billions of dollars by taking the combined works of humanity without permission,” the legal text begins, proceeding to explore at relative length the “close” relationship between the entities. “While OpenAI was responsible for designing the calibration and fine-tuning of the GPT models—and thus, the large-scale copying of this copyrighted material involved in generating a model programmed to accurately mimic Plaintiff’s and others’ styles—Microsoft built and operated the computer system that enabled this unlicensed copying in the first place,” the document continues.
Beyond this significant difference, which could, of course, have major implications down the line, other components of the suit resemble those within the above-mentioned complaints. Specifically, Sancton’s action touches upon OpenAI’s alleged transition from a non-profit “into a complex (and secretive) labyrinth of for-profit corporate entities,” the sources from which OpenAI allegedly accessed protected media, and the importance of training ChatGPT on quality content.
. . . “While OpenAI's anthropomorphizing of its models is up for debate, at a minimum, humans who learn from books buy them, or borrow them from libraries that buy them, providing at least some measure of compensation to authors and creators,” the suit drives home towards its end.
“OpenAI does not, and it has usurped authors’ content for the purpose of creating a machine built to generate the very type of content for which authors would usually be paid.” Among other things, Sancton is seeking statutory and compensatory damages, disgorgement of profits, and an order permanently enjoining the described alleged infringement.
November 2023
Why the Godfather of A.I. Fears What He’s Built
Geoffrey Hinton has spent a lifetime teaching computers to learn. Now he worries that artificial brains are better than ours.
By Joshua Rothman
November 13, 2023, The New Yorker
(Published in the print edition of the November 20, 2023, issue, with the headline “Metamorphosis.”)
Excerpt:
In your brain, neurons are arranged in networks big and small. With every action, with every thought, the networks change: neurons are included or excluded, and the connections between them strengthen or fade. This process goes on all the time—it’s happening now, as you read these words—and its scale is beyond imagining. You have some eighty billion neurons sharing a hundred trillion connections or more. Your skull contains a galaxy’s worth of constellations, always shifting.
. . .
There are many reasons to be concerned about the advent of artificial intelligence. It’s common sense to worry about human workers being replaced by computers, for example. But Hinton has joined many prominent technologists, including Sam Altman, the C.E.O. of OpenAI, in warning that A.I. systems may start to think for themselves, and even seek to take over or eliminate human civilization. It was striking to hear one of A.I.’s most prominent researchers give voice to such an alarming view.
“People say, It’s just glorified autocomplete,” he told me...
Excerpt
Demis Hassabis says AI could be one of the most important and beneficial technologies ever
The world must treat the risks from artificial intelligence as seriously as the climate crisis and cannot afford to delay its response, one of the technology’s leading figures has warned.
Speaking as the UK government prepares to host a summit on AI safety, Demis Hassabis said oversight of the industry could start with a body similar to the Intergovernmental Panel on Climate Change (IPCC).
Hassabis, the British chief executive of Google’s AI unit, said the world must act immediately in tackling the technology’s dangers, which included aiding the creation of bioweapons and the existential threat posed by super-intelligent systems.
“We must take the risks of AI as seriously as other major global challenges, like climate change,” he said. “It took the international community too long to coordinate an effective global response to this, and we’re living with the consequences of that now. We can’t afford the same delay with AI.”
Hassabis, whose unit created the revolutionary AlphaFold program that depicts protein structures, said AI could be “one of the most important and beneficial technologies ever invented”.
However, he told the Guardian a regime of oversight was needed and governments should take inspiration from international structures such as the IPCC.
. . . The summit on 1 and 2 November at Bletchley Park, the base for second world war codebreakers including Alan Turing, will focus on the threat of advanced AI systems helping to create bioweapons, carry out crippling cyber-attacks or evade human control. Hassabis will be one of the attenders, along with the chief executives of leading AI firms including OpenAI, the San Francisco-based developer of ChatGPT. . . .
Excerpt:
We investigated the odds ratios (ORs) for various diseases given the difference between the AI-estimated age and chronological age (ie, the difference-age).
Findings
We included 101,296 chest radiographs from 70,248 participants across five institutions.
...
Interpretation
The AI-estimated age using chest radiographs showed a strong correlation with chronological age in the healthy cohorts. Furthermore, in cohorts of individuals with known diseases, the difference between estimated age and chronological age correlated with various chronic diseases. The use of this biomarker might pave the way for enhanced risk stratification methodologies, individualised therapeutic interventions, and innovative early diagnostic and preventive approaches towards age-associated pathologies. ...
Google is testing an artificial intelligence tool that can write news articles, in the latest evidence that the technology has the potential to transform white-collar professions.
The product, known as Genesis, uses AI technology to absorb information such as details of current events and then create news stories. The tool was pitched to the New York Times, the Washington Post, and the Wall Street Journal’s owner, News Corp, as a “helpmate”, according to the New York Times.
Google said it was in the early stages of exploring the AI tool, which it said could assist journalists with options for headlines or different writing styles. It stressed that the technology was not intended to replace journalists.
...
Last paragraph
While newsrooms explore the possibility of using AI, an investigation this year by the anti-misinformation outfit NewsGuard found bots were already powering dozens of AI-generated content farms.
…With the rapid evolution of AI models, it’s hard to keep up.
But don’t worry. TechCrunch+ has your back.
GPT-4 was a big update
GPT-4 wipes the floor with GPT-3.5 (i.e., ChatGPT)
The point of this research is… to find methods by which relatively simple AI models can improve themselves based on their “experiences,” for lack of a better word. If we’re going to have robots helping us in our homes, hospitals, and offices, they will need to learn and apply those lessons to future actions.
In this bold reinterpretation of economics and history, the consequence of sole reliance on AI is revealed–as well as what must be done to redirect innovation so it benefits all.
. . . Power and Progress demonstrates the path of technology, and how it might be brought under control. Cutting-edge technological advances can become empowering tools, but not if all major decisions remain in the hands of a few hubristic tech leaders.
Review: An AI challenge only humans can solve by Peter Dizikes MIT News Office, May 17, 2023
Excerpts
Economists Daron Acemoglu and Simon Johnson ask whether the benefits of AI will be shared widely or feed inequality. . . . they examine who reaped the rewards from past innovations and who may gain from AI today, economically and politically.
. . .
Today, AI is a tool of social control for some governments that also creates riches for a small number of people, according to Acemoglu and Johnson. “The current path of AI is neither good for the economy nor for democracy, and these two problems, unfortunately, reinforce each other,” they write.
. . .
What do Acemoglu and Johnson think is deficient about AI? For one thing, they believe the development of AI is too focused on mimicking human intelligence. The scholars are skeptical of the notion that AI mirrors human thinking all told — even things like the chess program AlphaZero, which they regard more as a specialized set of instructions. . . .
May 2023
Internet Archive (IA) - Managing AI's "Hallucination Challenge"
Internet Archive founder and digital librarian Brewster Kahle explains how AI services can become more dependable, reliable, and trustworthy.
Excerpt:
Chatbots, like OpenAI’s ChatGPT, Google’s Bard and others, have a hallucination problem (their term, not ours). They can make something up and state it authoritatively. It is a real problem. But there can be an old-fashioned answer, as a parent might say: “Look it up!”
Imagine for a moment that the Internet Archive, working with responsible AI companies and research projects, could automate “Looking it Up” in a vast library to make those services more dependable, reliable, and trustworthy. How?
The Internet Archive and AI companies could offer an anti-hallucination service ‘add-on’ to the chatbots that could cite supporting evidence and counter claims to chatbot assertions by leveraging the library collections at the Internet Archive (most of which were published before generative AI).
By citing evidence for and against assertions based on papers, books, newspapers, magazines, TV, radio, and government documents, we can build a stronger, more reliable knowledge infrastructure for a generation that turns to their screens for answers. Although many of these generative AI companies are already linking, or are intending to link, their models to the internet, what the Internet Archive can uniquely offer is our vast collection of “historical internet” content. We have been archiving the web for 27 years, which means we have decades of human-generated knowledge. This might become invaluable in an age when we might see a drastic increase in AI-generated content. So an Internet Archive add-on is not just a matter of leveraging knowledge available on the internet, but also knowledge available on the history of the internet.
Is this possible? We think yes, because we are already doing something like this for Wikipedia by hand and with special-purpose robots like InternetArchiveBot. Wikipedia communities, and these bots, have fixed over 17 million broken links and have linked one million assertions to specific pages in over 250,000 books. With the help of the AI companies, we believe we can make this an automated process that could respond to the customized essays their services produce. Much of the same technologies used for the chatbots can be used to mine assertions in the literature and find when, and in what context, those assertions were made.
The result would be a more dependable World Wide Web, one where disinformation and propaganda are easier to challenge, and therefore weaken.
Yes, there are 4 major publishers suing to destroy a significant part of the Internet Archive’s book corpus, but we are appealing this ruling. We believe that one role of a research library like the Internet Archive is to own collections that can be used in new ways by researchers and the general public to understand their world.
What is required? Common purpose, partners, and money. We see a role for a Public AI Research laboratory that can mine vast collections without rights issues arising. While the collections are significant already, we see collecting, digitizing, and making available the publications of democracies around the world as a way to expand the corpus greatly.
We see roles for scientists, researchers, humanists, ethicists, engineers, governments, and philanthropists, working together to build a better Internet.
If you would like to be involved, please contact Mark Graham at mark@archive.org. ... more
OPEN MEDIA: What’s happening with the Internet Archive
June 16, 2023
Closing paragraph:
The Internet Archive and the right to access information are vital for a functioning democracy and a free society. We must support the Internet Archive in its fight against corporations who seek to limit our access to knowledge. We must enshrine our right to access information and ensure that it is protected for future generations. The Internet Archive will appeal the court decision that ruled in favour of publishers. Support their fight. ...
MEDIUM: Digital Librarian for and of the World
June 14, 2023
Excerpt:
Brewster Kahle is a Silicon Valley maverick, a man on a mission, and founder of the non-profit Internet Archive. Inducted into the Internet Hall of Fame, he is an original and uncannily insightful deep thinker. Geeky and bespectacled, he never hesitates to speak his mind. It is not surprising that he is a voracious reader, loves books and wants to save them for generations.
Situated within a historic church in the Richmond District in San Francisco, the Internet Archive office may seem like yet another eccentric whim of a Silicon Valley native. Brewster chose the place because it resembles their logo — pillars that look like the Library of Alexandria, which was destroyed by fire in 48 BCE. ...
March 2023
'Framing Humans for AI'
Journal for the Philosophy of Language, Mind and the Arts (JOLMA)
issue 4, 30 March 2023
by Gabriella Giannachi, University of Exeter
This article, developed in conversation with ChatGPT and GPT-4, explores how artists have represented human-machine AI entanglements by using works by Lynn Hershman Leeson, Mario Klingemann, Kate Crawford and Trevor Paglen, and Luca Viganò as case studies.
Full text
Excerpt:
The so-called social web of Twitter and Facebook is well established as a place of community, information and, of course, outrage. But new immersive technologies hint at how we might come to connect with war in the decade ahead. If for much of the 20th century radio and television piped war’s horrors into our living rooms, and this century has seen social media posts and video clips bring them to our pockets, these new technologies will wire them directly into our minds.
“There’s something about virtual reality and augmented reality that is very suited to war because VR and AR can convey war’s dilemmas like nothing else,” . . .
“This would be very meaningful to people if they could experience it in VR,”
Kate Crawford, Honorary Professor at the University of Sydney and one of the world's foremost scholars on the social and political implications of artificial intelligence. 29 July 2021
Excerpt:
WASHINGTON, UNITED STATES — Sam Altman, the chief executive of ChatGPT’s OpenAI, told US lawmakers on Tuesday that regulating artificial intelligence was essential after his poem-writing chatbot stunned the world.
The lawmakers voiced their deepest fears about AI’s development, with a leading senator opening the hearing on Capitol Hill with a computer-generated voice — which sounded remarkably similar to his own — reading a text written by the bot.
“If you were listening from home, you might have thought that voice was mine and the words from me, but in fact, that voice was not mine,” said Senator Richard Blumenthal.
. . .
“If this technology goes wrong, it can go quite wrong,” Altman said.
In a session tipped as an opportunity to educate lawmakers, Altman urged Congress to impose new rules on big tech, despite deep political divisions that for years have blocked legislation aimed at regulating the internet.
But governments worldwide are under pressure to move quickly after ChatGPT, a bot that can churn out human-like content in an instant, went viral and both wowed and spooked users... >>>more
March 2021
"ChatGPT" Moore's Law for Everything
by Sam Altman, CEO of OpenAI, the group that created ChatGPT
16 March 2021
Excerpt:
My work at OpenAI reminds me every day about the magnitude of the socioeconomic change that is coming sooner than most people believe. Software that can think and learn will do more and more of the work that people now do. Even more power will shift from labor to capital. If public policy doesn’t adapt accordingly, most people will end up worse off than they are today.
We need to design a system that embraces this technological future and taxes the assets that will make up most of the value in that world–companies and land–in order to fairly distribute some of the coming wealth. Doing so can make the society of the future much less divisive and enable everyone to participate in its gains.
In the next five years, computer programs that can think will read legal documents and give medical advice. In the next decade, they will do assembly-line work and maybe even become companions. And in the decades after that, they will do almost everything, including making new scientific discoveries that will expand our concept of “everything.”
This technological revolution is unstoppable. And a recursive loop of innovation, as these smart machines themselves help us make smarter machines, will accelerate the revolution’s pace. Three crucial consequences follow:
1. This revolution will create phenomenal wealth. The price of many kinds of labor (which drives the costs of goods and services) will fall toward zero once sufficiently powerful AI “joins the workforce.”
2. The world will change so rapidly and drastically that an equally drastic change in policy will be needed to distribute this wealth and enable more people to pursue the life they want.
3. If we get both of these right, we can improve the standard of living for people more than we ever have before.
Because we are at the beginning of this tectonic shift, we have a rare opportunity to pivot toward the future. That pivot can’t simply address current social and political problems; it must be designed for the radically different society of the near future.
Policy plans that don’t account for this imminent transformation will fail for the same reason that the organizing principles of pre-agrarian or feudal societies would fail today.
What follows is a description of what’s coming and a plan for how to navigate this new landscape. . .
Part 1 The AI Revolution
On a zoomed-out time scale, technological progress follows an exponential curve. Compare how the world looked 15 years ago (no smartphones, really), 150 years ago (no combustion engine, no home electricity), 1,500 years ago (no industrial machines), and 15,000 years ago (no agriculture).
The coming change will center around the most impressive of our capabilities: the phenomenal ability to think, create, understand, and reason. To the three great technological revolutions–the agricultural, the industrial, and the computational–we will add a fourth: the AI revolution. This revolution will generate enough wealth for everyone to have what they need, if we as a society manage it responsibly.
The technological progress we make in the next 100 years will be far larger than all we’ve made since we first controlled fire and invented the wheel. We have already built AI systems that can learn and do useful things. They are still primitive, but the trendlines are clear.
Part 2
Moore's Law for Everything
Part 3
Capitalism for Everyone
Part 4
Implementation and Troubleshooting
Part 5
Shifting to the New System >>>more
What is artificial intelligence?
Everything you need to know about the history of AI, what we mean by 'deep learning', and if we can really trust artificial intelligence.
By Dr Peter Bentley, BBC Science Focus, October 27, 2020
Excerpt
Artificial intelligence (AI) has been around since the birth of computers in the 1950s. The original pioneers dreamed of making ‘computer brains’ that could perform the same kinds of tasks as our own brains, such as playing chess or translating languages. But hopes that AI would quickly reach human-level intelligence didn’t come to fruition, and AI soon fell out of favour.
Over the following decades, technology improved at an exponential rate. Computers got faster, the internet was invented, and researchers made new advances in AI algorithms.
In the last decade, AI has started solving many of the problems that we always dreamed it could. This has prompted billions of dollars of investment from companies, governments and financiers, and many major organisations now embrace AI as a core element of their business.
Are there different kinds of AI?
Mention AI, and most people think of ‘deep learning’. This kind of AI is loosely inspired by the way our brains work. It uses lots of computers to simulate large networks of artificial ‘neurons’, which are then trained, typically using humongous amounts of data, until they’ve learned to do what we want them to – for example, understanding speech.
This training is the slow and resource-heavy part. Once trained, even a phone can then run the AI and instantly perform the right function, such as obeying your voice command. Deep learning is just one kind of AI, among thousands of others. ...
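The train-then-deploy split described above can be sketched in a few lines of Python. This is a toy illustration only, not any production system: a single artificial "neuron" is trained by gradient descent to learn the logical OR function, and once its connection weights are found, inference is a single cheap pass.

```python
import math

# Squashing function for the neuron's output.
def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Training data: the logical OR function (inputs, target output).
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w = [0.0, 0.0]  # connection weights
b = 0.0         # bias

# "Training" = repeatedly nudging the weights to reduce prediction
# error. This loop is the slow, resource-heavy part the text describes
# (here trivially small; real deep learning does this at vast scale).
for _ in range(10000):
    for (x1, x2), target in data:
        out = sigmoid(w[0] * x1 + w[1] * x2 + b)
        grad = (out - target) * out * (1 - out)  # gradient of squared error
        w[0] -= 0.5 * grad * x1
        w[1] -= 0.5 * grad * x2
        b -= 0.5 * grad

# Once trained, even a phone could run this: one pass per prediction.
for (x1, x2), _ in data:
    print((x1, x2), "->", round(sigmoid(w[0] * x1 + w[1] * x2 + b)))
```

Real deep networks stack millions of such neurons in layers and train them with backpropagation on huge datasets, but the principle — adjust connection strengths until the outputs match the examples — is the same.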
Excerpt
The disagreement this summer between two currently reigning U.S. tech titans has brought new visibility to the debate about possible risks of Artificial Intelligence. Tesla and SpaceX CEO Elon Musk has been warning since 2014 about the doomsday potential of runaway AI. Along the way, Musk’s views have been challenged by many, but the debate went mainstream when another billionaire celebrity CEO, Facebook’s Mark Zuckerberg, took issue with what he believes to be Musk’s alarmist view.
This tiff of the titans is worth noting because of the high stakes and because their opposing views represent the most common schism over AI. Is it a controllable force for good, or a potentially uncontrollable force that, unless managed properly—or even shut down—places humanity at an unreasonably high risk of doom?
Of course, this debate isn’t new. ...
June 2017
Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy
By Cathy O’Neil, 2017, Penguin
"We are allowing the technologists to frame this as a problem that they’re equipped to solve. That’s a lie. People talk about AI as if it will know truth. AI’s not gonna solve these problems. AI cannot solve the problem of fake news. Google doesn’t have the option of saying, “Oh, is this conspiracy? Is this truth?” Because they don’t know what truth is. They don’t have a proxy for truth that’s better than a click."
Summary
A former Wall Street data scientist sounds an alarm on the mathematical models that pervade modern life - and threaten to rip apart our social fabric
We live in the age of the algorithm. Increasingly, the decisions that affect our lives - whether we get a job or a loan, how much we pay for insurance - are being made by mathematical models. In theory, this should lead to greater fairness: everyone is judged according to the same rules, and bias is eliminated. But as Cathy O'Neil reveals in this urgent and necessary book, the opposite is true. The models being used today are opaque, unregulated, and incontestable, even when they're wrong. Most troubling, they reinforce discrimination, creating a toxic cocktail for democracy.
Tracing the arc of a person's life, Cathy O'Neil exposes the black box models that shape our future as individuals and as a society. These "weapons of math destruction" score teachers and students, sort CVs, grant or deny loans, evaluate workers, target voters and monitor our health. O'Neil calls on modellers to take more responsibility for their algorithms and on policy makers to regulate their use. But in the end, it's up to us to become more savvy about the models that govern our lives.
Excerpt:
Today, OpenAI, a nonprofit artificial intelligence research company, was announced to the world. Its director, Ilya Sutskever, is a research scientist at Google. This comes a day after Facebook open-sourced its AI hardware.
Its reason for existing was explained in an introductory post:
Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.
Former Stripe CTO Greg Brockman is taking the same position for OpenAI. There are quite a few other interesting names involved, including Y Combinator’s Sam Altman and Tesla/SpaceX’s Elon Musk acting as co-chairs:
The group’s other founding members are world-class research engineers and scientists: Trevor Blackwell, Vicki Cheung, Andrej Karpathy, Durk Kingma, John Schulman, Pamela Vagata, and Wojciech Zaremba. Pieter Abbeel, Yoshua Bengio, Alan Kay, Sergey Levine, and Vishal Sikka are advisors to the group. OpenAI’s co-chairs are Sam Altman and Elon Musk.
The organization is being funded by Altman, Brockman, Musk, Jessica Livingston, Peter Thiel, Amazon Web Services, Infosys and YC Research. Those funders have contributed $1 billion thus far. Musk has been donating money to make sure that AI doesn’t go the way of Skynet, so it’s nice to know that his involvement will have a safety lens on it. >>>more
Introduction
OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.
We believe AI should be an extension of individual human wills and, in the spirit of liberty, as broadly and evenly distributed as possible. The outcome of this venture is uncertain and the work is difficult, but we believe the goal and the structure are right. We hope this is what matters most to the best in the field. >>>more
You might not think of it often, but behind the scenes Facebook uses a lot of artificial intelligence. The company leans heavily on AI, using machine learning to curate a better news feed, sort through photo and video content and even read stories or play games. Now, the company is making Big Sur, the hardware it runs its AI experiments on, open-source.