🤖 ChatGPT Mirrors Human Perception; AI Governance Dilemmas; Monkey Controls Robot Arm with Thought
Weekly China AI News from May 8 to May 14
Dear Readers, in this week’s issue, I delved into a fascinating study that explores how body size influences human interactions with surrounding objects, and, interestingly, found a parallel in ChatGPT. A distinguished professor at a top-tier Chinese university recently emphasized the challenges faced in the realm of AI governance. And a team of Chinese researchers used brain-computer interface technology to enable a monkey to control a robotic arm with thought alone.
Weekly News Roundup
🎶 Mandopop singer Stefanie Sun’s music has experienced a resurgence in popularity on Bilibili, China’s top user-generated video streaming platform, due to tech-savvy users employing AI to clone Sun’s voice and incorporate it into Mandopop classics. Read more.
🥸 ByteDance’s Douyin, the Chinese version of TikTok, has implemented new rules mandating creators to label AI-generated content, to help users discern between real and virtual content, following China’s move to regulate deepfake technology and tools similar to ChatGPT. Read more.
🚔 A man in China has been detained by police for allegedly using ChatGPT to create and spread a fake news story about a train crash for profit, marking the country’s first criminal case related to the AI chatbot. Read more.
🚙 Didi’s autonomous driving subsidiary and GAC Aion New Energy Automobile have teamed up to launch the “AIDI Project”, a first-of-its-kind joint venture in China aiming to mass-produce autonomous new-energy vehicles and integrate them into Didi’s network by 2025.
🎨 Visual China released a new generative AI-based creative tool, allowing users to generate derivative works from copyrighted images online. Original creators continue to earn a share of the profits when their adapted images are downloaded and paid for.
🧐 Alibaba’s autonomous driving team has been integrated into Cainiao Network, shifting from a research project under the DAMO Academy to a tech team within Cainiao Network.
AI Thinks Like Us: Study Reveals ChatGPT Perceives Affordances Similarly to Humans
What’s new: Researchers from Tsinghua University and Beijing Normal University discovered that ChatGPT, despite lacking a physical body, perceives the environment at human body scale much as people do.
What does that mean? In this study, researchers investigated how human body size shapes the way people interact with surrounding objects, a concept known as “affordance”: the perception of action possibilities. For example, when you see a door handle, you understand that you can pull or push it to open the door. That’s an affordance.
The result: body size influences how humans perceive these affordances. If you’re a small kid, a high shelf might be out of reach; if you’re a tall adult, it isn’t. In other words, your body size “divides” objects into two groups: those you can interact with, and those beyond your body’s range.
For example, in a picnic scene, objects of various sizes fall into two categories: objects within the normal body-size range, such as apples, bottles, and umbrellas, and objects beyond it, such as bicycles, ladders, and airplanes.
So does ChatGPT: What’s interesting is that ChatGPT exhibited a similar “boundary” at the scale of a human body. This suggests the boundary isn’t unique to physical bodies; it can also emerge in something purely digital.
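To make the idea concrete, here is a minimal sketch of how one might probe a chat model for such a body-scale boundary. This is a hypothetical illustration using the OpenAI Python SDK, not the researchers’ actual protocol; the model name, prompt wording, and object list (taken from the picnic example above) are all assumptions.

```python
# Hypothetical probe of an LLM's body-scale "boundary" (not the study's protocol).
# Requires the OpenAI Python SDK and an OPENAI_API_KEY environment variable.
from openai import OpenAI

client = OpenAI()

# Objects from the picnic example above.
objects = ["apple", "bottle", "umbrella", "bicycle", "ladder", "airplane"]

for obj in objects:
    prompt = (
        f"Can a typical adult human directly pick up and manipulate a {obj} "
        "with their hands? Answer with exactly one word: yes or no."
    )
    resp = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # near-deterministic answers for a cleaner split
    )
    answer = resp.choices[0].message.content.strip().lower()
    group = "within body range" if answer.startswith("yes") else "beyond body range"
    print(f"{obj}: {group}")
```

If the model’s yes/no answers split the objects the same way human body size does, that is the kind of “boundary” the study reports.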
Further experiments with brain imaging (fMRI) showed that our brains “think” only in terms of affordances that fit our body size: we tend to consider only objects we can physically manipulate or interact with.
Why it matters: This research supports the idea of “embodied cognition,” the view that our thoughts and intelligence are deeply tied to our physical bodies and how we interact with the world. In other words, the body we have might shape the way we think, and our intelligence might be bounded by our physical limits.
Elite University Professor Stresses Challenges of AI Governance Amid Fears of Singularity
Ji Weidong, Professor of Humanities and Social Sciences at Shanghai Jiao Tong University, Dean of the Chinese Institute of Law and Society, and President of the Computing Law Branch of the China Computer Federation, voiced his thoughts on the evolving landscape of AI. Professor Ji stressed the importance of proactive measures to navigate the paradoxes that emerge from AI development. Below are the highlights; the original op-ed can be found here.
Strong AI refers to AI systems whose cognitive and problem-solving capabilities are comparable to, or may even surpass, human intelligence. There are conflicting standpoints on how to handle strong AI. One advocates halting new research until safety is assured, a principle of “precaution first.” The other supports continuing development until harm is proven, a principle of “no prior approval needed for technological development.” Historically, Europe leans toward the former while the U.S. leans toward the latter.
The emergence of strong AI has sparked various fears among the public and even AI experts. They can be summed up as fears of unemployment, loss of control, and misinformation.
Ji suggests a shift in AI governance from algorithm-focused to model-focused. He proposes that the standard for AI governance should transition from minimizing algorithmic bias to minimizing model misuse. Transparency will be less critical, and predictive accuracy will become paramount.
Centralized registration and review might not be sufficient; more decentralized regulatory mechanisms, like public supervision or encouraging user complaints, should be introduced. Such an approach would enable society to participate in AI regulation.
In 2021, China released a new generation of AI ethical standards emphasizing trustworthiness, controllability, accountability, and agile response, and the country continues to strengthen its governance of AI development activities. A draft regulation for generative AI services is also open for public comment. It aims to enhance supervision through methods such as filing systems, security assessments, expert review procedures, clearly defined responsible parties, risk lists, and random checks. However, how these regulations will integrate with existing laws, such as the Cybersecurity Law, Data Security Law, and Personal Information Protection Law, requires further deliberation and improvement.
Dialogues with AI models like ChatGPT enhance the self-referential and self-reproducing features of the legal system. If AI-generated data becomes the primary resource for training AI in the near future, legal communication through large language models like ChatGPT will become fully artificial. This might lead to a completely closed-loop intelligent judicial system and an artificial communication system.
As generative and advanced AI are applied to the highly standardized field of law, communication and judgment about specific cases may appear to be on an infinitely efficient trajectory. However, this might also lock out diversity and the mechanisms of legal development, leading to an inextricable dilemma or “black hole.” This, in turn, might cause the simplification, stagnation, or even regression of the entire field of law and its dispute-resolution mechanisms.
Chinese Scientists Enable Monkey to Control Robotic Arm with Thought
On May 4, a team of Chinese scientists announced that they had successfully carried out the world’s first interventional brain-computer interface experiment on a monkey. The experiment was led by a research team at Nankai University, with help from the Chinese People's Liberation Army General Hospital and a company called Shanghai Xinwei Medical Technology, and built on the team’s previous experiments with sheep.
What’s a brain-computer interface? A BCI is a technology that translates brain signals into computer commands, potentially helping people who have difficulty moving their arms or legs, due to conditions like stroke or ALS, to interact with computers or other devices.
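To illustrate that “translate brain signals into commands” step, here is a minimal sketch of a generic linear decoder fit on synthetic data. This is an illustration of the general decoding idea, not the Nankai team’s method; all the data and dimensions are made up.

```python
# Minimal sketch of the decoding step in a BCI: map neural signals to movement
# commands. Generic linear decoder on synthetic data, for illustration only;
# this is NOT the Nankai team's actual method.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "recordings": 1000 time steps of activity from 32 channels.
n_steps, n_channels = 1000, 32
true_weights = rng.normal(size=(n_channels, 2))       # hidden signal-to-movement map
signals = rng.normal(size=(n_steps, n_channels))      # neural features per time step
velocity = signals @ true_weights + 0.1 * rng.normal(size=(n_steps, 2))  # 2D arm velocity

# Calibration: fit a linear decoder with least squares on recorded pairs.
weights, *_ = np.linalg.lstsq(signals, velocity, rcond=None)

# Online use: turn a new brain-signal sample into a movement command.
new_sample = rng.normal(size=(1, n_channels))
command = new_sample @ weights
print("decoded (vx, vy):", command.round(3))
```

Real systems differ mainly in where the signals come from, which is exactly the trade-off the next paragraph describes.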
How it works: One unique aspect of this experiment is how the researchers collected brain signals. Existing methods of gathering these signals have drawbacks. Invasive methods require surgery that can be harmful, as seen in some similar experiments in the U.S., where several monkeys unfortunately died. Non-invasive methods avoid surgery but are less accurate, because signals recorded outside the skull are weaker and noisier.
In this study, researchers used an interventional brain-computer interface: they delivered a sensor into blood vessels in the monkey’s brain, allowing them to collect brain signals accurately. As a result, the monkey could control a robotic arm by thinking.
More explanation: An interventional brain-computer interface is established through a minimally invasive procedure: a small puncture is made in a blood vessel, and the brain-machine connection is achieved through surgery similar to cardiac stent implantation. This method causes less trauma than invasive brain-computer interfaces and provides higher signal quality than non-invasive ones.
Dr. Feng Duan, a professor at Nankai University and the team lead, said the study has advanced interventional BCI from forward-looking laboratory research toward clinical application.
Trending Research
Progressive-Hint Prompting Improves Reasoning in Large Language Models
PandaLM: Reproducible and Automated Language Model Assessment
WebCPM: Interactive Web Search for Chinese Long-form Question Answering