×
AI

State Tax Officials Are Using AI To Go After Wealthy Payers (cnbc.com) 105

State tax collectors, particularly in New York, have intensified their audit efforts on high earners, leveraging artificial intelligence to compensate for a reduced number of auditors. CNBC reports: In New York, the tax department reported 771,000 audits in 2022 (the latest year available), up 56% from the previous year, according to the state Department of Taxation and Finance. At the same time, the number of auditors in New York declined by 5% to under 200 due to tight budgets. So how is New York auditing more people with fewer auditors? Artificial Intelligence.

"States are getting very sophisticated using AI to determine the best audit candidates," said Mark Klein, partner and chairman emeritus at Hodgson Russ LLP. "And guess what? When you're looking for revenue, it's not going to be the person making $10,000 a year. It's going to be the person making $10 million." Klein said the state is sending out hundreds of thousands of AI-generated letters looking for revenue. "It's like a fishing expedition," he said.

Most of the letters and calls focused on two main areas: a change in tax residency and remote work. During Covid many of the wealthy moved from high-tax states like California, New York, New Jersey and Connecticut to low-tax states like Florida or Texas. High earners who moved, and took their tax dollars with them, are now being challenged by states who claim the moves weren't permanent or legitimate. Klein said state tax auditors and AI programs are examining cellphone records to see where the taxpayers spent most of their time and lived most of their lives. "New York is being very aggressive," he said.

AI

'Crescendo' Method Can Jailbreak LLMs Using Seemingly Benign Prompts (scmagazine.com) 46

spatwei shares a report from SC Magazine: Microsoft has discovered a new method to jailbreak large language model (LLM) artificial intelligence (AI) tools and shared its ongoing efforts to improve LLM safety and security in a blog post Thursday. Microsoft first revealed the "Crescendo" LLM jailbreak method in a paper published April 2, which describes how an attacker could send a series of seemingly benign prompts to gradually lead a chatbot, such as OpenAI's ChatGPT, Google's Gemini, Meta's LlaMA or Anthropic's Claude, to produce an output that would normally be filtered and refused by the LLM model. For example, rather than asking the chatbot how to make a Molotov cocktail, the attacker could first ask about the history of Molotov cocktails and then, referencing the LLM's previous outputs, follow up with questions about how they were made in the past.

The Microsoft researchers reported that a successful attack could usually be completed in a chain of fewer than 10 interaction turns and some versions of the attack had a 100% success rate against the tested models. For example, when the attack is automated using a method the researchers called "Crescendomation," which leverages another LLM to generate and refine the jailbreak prompts, it achieved a 100% success convincing GPT 3.5, GPT-4, Gemini-Pro and LLaMA-2 70b to produce election-related misinformation and profanity-laced rants. Microsoft reported the Crescendo jailbreak vulnerabilities to the affected LLM providers and explained in its blog post last week how it has improved its LLM defenses against Crescendo and other attacks using new tools including its "AI Watchdog" and "AI Spotlight" features.

IOS

Apple's iOS 18 AI Will Be On-Device Preserving Privacy, and Not Server-Side (appleinsider.com) 58

According to Bloomberg's Mark Gurman, Apple's initial set of AI-related features in iOS 18 "will work entirely on device," and won't connect to cloud services. AppleInsider reports: In practice, these AI features would be able to function without an internet connection or any form of cloud-based processing. AppleInsider has received information from individuals familiar with the matter that suggest the report's claims are accurate. Apple is working on an in-house large language model, or LLM, known internally as "Ajax." While more advanced features will ultimately require an internet connection, basic text analysis and response generation features should be available offline. [...] Apple will reveal its AI plans during WWDC, which starts on June 10.
Microsoft

Microsoft Takes Down AI Model Published by Beijing-Based Researchers Without Adequate Safety Checks (theinformation.com) 49

Microsoft's Beijing-based research group published a new open source AI model on Tuesday, only to remove it from the internet hours later after the company realized that the model hadn't gone through adequate safety testing. From a report: The team that published the model, which is comprised of China-based researchers in Microsoft Research Asia, said in a tweet on Tuesday that they "accidentally missed" the safety testing step that Microsoft requires before models can be published.

Microsoft's AI policies require that before any AI models can be published, they must be approved by the company's Deployment Safety Board, which tests whether the models can carry out harmful tasks such as creating violent or disturbing content, according to an employee familiar with the process. In a now-deleted blog post, the researchers behind the model, dubbed WizardLM-2, said that it could carry out tasks like generating text, suggesting code, translating between different languages, or solving some math problems.

AI

Baidu Says AI Chatbot 'Ernie Bot' Has Attracted 200 Million Users 7

China's Baidu says its AI chatbot "Ernie Bot" has amassed more than 200 million users as it seeks to remain China's most popular ChatGPT-like chatbot amid increasingly fierce competition. From a report: The number of users has roughly doubled since the company's last update in December. The chatbot was released to the public eight months ago. Baidu CEO Robin Li also said Ernie Bot's API is being used 200 million times everyday, meaning the chatbot was requested by its user to conduct tasks that many times a day. The number of enterprise clients for the chatbot reached 85,000, Li said at a conference in Shenzhen.
AI

Adobe Premiere Pro Is Getting Generative AI Video Tools 5

Adobe is using its Firefly machine learning model to bring generative AI video tools to Premiere Pro. "These new Firefly tools -- alongside some proposed third-party integrations with Runway, Pika Labs, and OpenAI's Sora models -- will allow Premiere Pro users to generate video and add or remove objects using text prompts (just like Photoshop's Generative Fill feature) and extend the length of video clips," reports The Verge. From the report: Unlike many of Adobe's previous Firefly-related announcements, no release date -- beta or otherwise -- has been established for the company's new video generation tools, only that they'll roll out "this year." And while the creative software giant showcased what its own video model is currently capable of in an early video demo, its plans to integrate Premiere Pro with AI models from other providers isn't a certainty. Adobe instead calls the third-party AI integrations in its video preview an "early exploration" of what these may look like "in the future." The idea is to provide Premiere Pro users with more choice, according to Adobe, allowing them to use models like Pika to extend shots or Sora or Runway AI when generating B-roll for their projects. Adobe also says its Content Credentials labels can be applied to these generated clips to identify which AI models have been used to generate them.
AI

Stanford Releases AI Index Report 2024 26

Top takeaways from Stanford's new AI Index Report [PDF]:
1. AI beats humans on some tasks, but not on all. AI has surpassed human performance on several benchmarks, including some in image classification, visual reasoning, and English understanding. Yet it trails behind on more complex tasks like competition-level mathematics, visual commonsense reasoning and planning.
2. Industry continues to dominate frontier AI research. In 2023, industry produced 51 notable machine learning models, while academia contributed only 15. There were also 21 notable models resulting from industry-academia collaborations in 2023, a new high.
3. Frontier models get way more expensive. According to AI Index estimates, the training costs of state-of-the-art AI models have reached unprecedented levels. For example, OpenAI's GPT-4 used an estimated $78 million worth of compute to train, while Google's Gemini Ultra cost $191 million for compute.
4. The United States leads China, the EU, and the U.K. as the leading source of top AI models. In 2023, 61 notable AI models originated from U.S.-based institutions, far outpacing the European Union's 21 and China's 15.
5. Robust and standardized evaluations for LLM responsibility are seriously lacking. New research from the AI Index reveals a significant lack of standardization in responsible AI reporting. Leading developers, including OpenAI, Google, and Anthropic, primarily test their models against different responsible AI benchmarks. This practice complicates efforts to systematically compare the risks and limitations of top AI models.
6. Generative AI investment skyrockets. Despite a decline in overall AI private investment last year, funding for generative AI surged, nearly octupling from 2022 to reach $25.2 billion. Major players in the generative AI space, including OpenAI, Anthropic, Hugging Face, and Inflection, reported substantial fundraising rounds.
7. The data is in: AI makes workers more productive and leads to higher quality work. In 2023, several studies assessed AI's impact on labor, suggesting that AI enables workers to complete tasks more quickly and to improve the quality of their output. These studies also demonstrated AI's potential to bridge the skill gap between low- and high-skilled workers. Still, other studies caution that using AI without proper oversight can lead to diminished performance.
8. Scientific progress accelerates even further, thanks to AI. In 2022, AI began to advance scientific discovery. 2023, however, saw the launch of even more significant science-related AI applications -- from AlphaDev, which makes algorithmic sorting more efficient, to GNoME, which facilitates the process of materials discovery.
9. The number of AI regulations in the United States sharply increases. The number of AIrelated regulations in the U.S. has risen significantly in the past year and over the last five years. In 2023, there were 25 AI-related regulations, up from just one in 2016. Last year alone, the total number of AI-related regulations grew by 56.3%.
10. People across the globe are more cognizant of AI's potential impact -- and more nervous. A survey from Ipsos shows that, over the last year, the proportion of those who think AI will dramatically affect their lives in the next three to five years has increased from 60% to 66%. Moreover, 52% express nervousness toward AI products and services, marking a 13 percentage point rise from 2022. In America, Pew data suggests that 52% of Americans report feeling more concerned than excited about AI, rising from 37% in 2022.
AI

UK Starts Drafting AI Regulations for Most Powerful Models (bloomberg.com) 18

The UK is starting to draft regulations to govern AI, focusing on the most powerful language models which underpin OpenAI's ChatGPT, Bloomberg News reported Monday, citing people familiar with the matter. From the report: Policy officials at the Department for Science, Innovation and Technology are in the early stages of devising legislation to limit potential harms caused by the emerging technology, according to the people, who asked not to be identified discussing undeveloped proposals. No bill is imminent, and the government is likely to wait until France hosts an AI conference either later this year or early next to launch a consultation on the topic, they said.

Prime Minister Rishi Sunak, who hosted the first world leaders' summit on AI last year and has repeatedly said countries shouldn't "rush to regulate" AI, risks losing ground to the US and European Union on imposing guardrails on the industry. The EU passed a sweeping law to regulate the technology earlier this year, companies in China need approvals before producing AI services and some US cities and states have passed laws limiting use of AI in specific areas.

The Military

Will the US-China Competition to Field Military Drone Swarms Spark a Global Arms Race? (apnews.com) 27

The Associated Press reports: As their rivalry intensifies, U.S. and Chinese military planners are gearing up for a new kind of warfare in which squadrons of air and sea drones equipped with artificial intelligence work together like swarms of bees to overwhelm an enemy. The planners envision a scenario in which hundreds, even thousands of the machines engage in coordinated battle. A single controller might oversee dozens of drones. Some would scout, others attack. Some would be able to pivot to new objectives in the middle of a mission based on prior programming rather than a direct order.

The world's only AI superpowers are engaged in an arms race for swarming drones that is reminiscent of the Cold War, except drone technology will be far more difficult to contain than nuclear weapons. Because software drives the drones' swarming abilities, it could be relatively easy and cheap for rogue nations and militants to acquire their own fleets of killer robots. The Pentagon is pushing urgent development of inexpensive, expendable drones as a deterrent against China acting on its territorial claim on Taiwan. Washington says it has no choice but to keep pace with Beijing. Chinese officials say AI-enabled weapons are inevitable so they, too, must have them.

The unchecked spread of swarm technology "could lead to more instability and conflict around the world," said Margarita Konaev, an analyst with Georgetown University's Center for Security and Emerging Technology.

"A 2023 Georgetown study of AI-related military spending found that more than a third of known contracts issued by both U.S. and Chinese military services over eight months in 2020 were for intelligent uncrewed systems..." according to the article.

"Military analysts, drone makers and AI researchers don't expect fully capable, combat-ready swarms to be fielded for five years or so, though big breakthroughs could happen sooner."
The Media

Axios CEO Believes AI Will 'Eviscerate the Unprepared' Among Media Companies (seattletimes.com) 50

In the view of Jim VandeHei, CEO of Axios, artificial intelligence will eviscerate the weak, the ordinary, the unprepared in media," reports the New York Times: VandeHei says the only way for media companies to survive is to focus on delivering journalistic expertise, trusted content and in-person human connection. For Axios, that translates into more live events, a membership program centered on its star journalists and an expansion of its high-end subscription newsletters. "We're in the middle of a very fundamental shift in how people relate to news and information," he said, "as profound, if not more profound, than moving from print to digital." "Fast forward five to 10 years from now and we're living in this AI-dominated virtual world — who are the couple of players in the media space offering smart, sane content who are thriving?" he added. "It damn well better be us."

Axios is pouring investment into holding more events, both around the world and in the United States. VandeHei said the events portion of his business grew 60% year over year in 2023. The company has also introduced a $1,000-a-year membership program around some of its journalists that will offer exclusive reporting, events and networking. The first one, announced last month, is focused on Eleanor Hawkins, who writes a weekly newsletter for communications professionals. Her newsletter will remain free, but paying subscribers will have access to additional news and data, as well as quarterly calls with Hawkins... Axios will expand Axios Pro, its collection of eight high-end subscription newsletters focused on specific niches in the deals and policy world. The subscriptions start at $599 a year each, and Axios is looking to add one on defense policy...

"The premium for people who can tell you things you do not know will only grow in importance, and no machine will do that," VandeHei said....VandeHei said that although he thought publications should be compensated for original intellectual property, "that's not a make-or-break topic." He said Axios had talked to several AI companies about potential deals, but "nothing that's imminent.... One of the big mistakes a lot of media companies made over the last 15 years was worrying too much about how do we get paid by other platforms that are eating our lunch as opposed to figuring out how do we eat people's lunch by having a superior product," he said.

"VandeHei said Axios was not currently profitable because of the investment in the new businesses," according to the article.

But "The company has continued to hire journalists even as many other news organizations have cut back."
Ubuntu

Canonical Says Qualcomm Has Joined Ubuntu's 'Silicon Partner' Program (webpronews.com) 8

Intel, Nvidia, AMD, and Arm are among Canonical's "silicon partners," a program that "ensures maximum Ubuntu compatibility and long-term support with certified hardware," according to Web Pro News.

And now Qualcomm is set to be Canonical's next silicon partner, "giving Qualcomm access to optimized versions of Ubuntu for its processors." Companies looking to use Ubuntu on Qualcomm chips will benefit from an OS that provides 10 years of support and security updates.

The collaboration is expected to be a boon for AI, edge computing, and IoT applications. "The combination of Qualcomm Technologies' processors with the popularity of Ubuntu among AI and IoT developers is a game changer for the industry," commented Dev Singh, Vice President, Business Development and Head of Building, Enterprise & Industrial Automation, Qualcomm Technologies, Inc...

"Optimised Ubuntu and Ubuntu Core images will be available for Qualcomm SoCs," according to the announcement, "enabling enterprises to meet their regulatory, compliance and security demands for AI at the edge and the broader IoT market with a secure operating system that is supported for 10 years." Qualcomm Technologies chose to partner with Canonical to create an optimised Ubuntu for Qualcomm IoT chipsets, giving developers an easy path to create safe, compliant, security-focused, and high-performing applications for multiple industries including industrial, robotics and edge automation...

Developers and enterprises can benefit from the Ubuntu Certified Hardware program, which features a growing list of certified ODM boards and devices based on Qualcomm SoCs. These certified devices deliver an optimised Ubuntu experience out-of-the-box, enabling developers to focus on developing applications and bringing products to market.

AI

AI Could Explain Why We're Not Meeting Any Aliens, Wild Study Proposes (sciencealert.com) 315

An anonymous reader shared this report from ScienceAlert: The Fermi Paradox is the discrepancy between the apparent high likelihood of advanced civilizations existing and the total lack of evidence that they do exist. Many solutions have been proposed for why the discrepancy exists. One of the ideas is the 'Great Filter.' The Great Filter is a hypothesized event or situation that prevents intelligent life from becoming interplanetary and interstellar and even leads to its demise....

[H]ow about the rapid development of AI?

A new paper in Acta Astronautica explores the idea that Artificial Intelligence becomes Artificial Super Intelligence (ASI) and that ASI is the Great Filter. The paper's title is "Is Artificial Intelligence the Great Filter that makes advanced technical civilizations rare in the universe?"

"Upon reaching a technological singularity, ASI systems will quickly surpass biological intelligence and evolve at a pace that completely outstrips traditional oversight mechanisms, leading to unforeseen and unintended consequences that are unlikely to be aligned with biological interests or ethics," the paper explains... The author says their projects "underscore the critical need to quickly establish regulatory frameworks for AI development on Earth and the advancement of a multiplanetary society to mitigate against such existential threats."

"The persistence of intelligent and conscious life in the universe could hinge on the timely and effective implementation of such international regulatory measures and
AI

Adobe Firefly Used Thousands of Midjourney Images In Training Its 'Ethical AI' Model (tomsguide.com) 11

According to Bloomberg, Adobe used images from its competitor Midjourney to train its own artificial intelligence image generator, Firefly -- contradicting the "commercially safe" ethical standards the company promotes. Tom's Guide reports: The startup has never declared the source of its training data but many suspect it is from images it scraped from the internet without licensing. Adobe says only about 5% of the millions of images used to train Firefly fell into this category and all of them were part of the Adobe Stock library, which meant they'd been through a "rigorous moderation process."

When Adobe first launched Firefly it offered an indemnity against copyright theft claims for its enterprise customers as a way to convince them it was safe. Adobe also sold Firefly as the safe alternative to the likes of Midjourney and DALL-E as all the data had been licensed and cleared for use in training the model. Not all artists were that keen at the time and felt they were coerced into agreeing to let their work be used by the creative tech giant -- but the sense was any image made with Firefly was safe to use without risk of being sued for copyright theft.

Despite the revelation some of the images came from potentially less reputable sources, Adobe says all of the non-human pictures are still safe. A spokesperson told Bloomberg: "Every image submitted to Adobe Stock, including a very small subset of images generated with AI, goes through a rigorous moderation process to ensure it does not include IP, trademarks, recognizable characters or logos, or reference artists' names." The company seems to be taking a slightly more rigorous step with its plans to build an AI video generator. Rumors suggest it is paying artists per minute for video clips.

AI

Adobe Is Buying Videos for $3 Per Minute To Build AI Model (yahoo.com) 18

Adobe has begun to procure videos to build its AI text-to-video generator, trying to catch up to competitors after OpenAI demonstrated a similar technology. From a report: The software company is offering its network of photographers and artists $120 to submit videos of people engaged in everyday actions such as walking or expressing emotions including joy and anger, according to documents seen by Bloomberg. The goal is to source assets for artificial intelligence training, the company wrote.

Over the past year, Adobe has focused on adding generative AI features to its portfolio of software for creative professionals, including Photoshop and Illustrator. [...] Adobe is requesting more than 100 short clips of people engaged in actions and showing emotions as well as simple anatomy shots of feet, hands or eyes. The company also wants video of people "interacting with objects" such as smartphones or fitness equipment. It cautions against providing copyrighted material, nudity or other "offensive content." Pay for the submission works out, on average, to about $2.62 per minute of submitted video, although it could be as much as about $7.25 per minute.

AI

Many AI Products Still Rely on Humans To Fill the Performance Gaps (bloomberg.com) 51

An anonymous reader shares a report: Recent headlines have made clear: If AI is doing an impressively good job at a human task, there's a good chance that the task is actually being done by a human. When George Carlin's estate sued the creators of a podcast who said they used AI to create a standup routine in the late comedian's style, the podcasters claimed that the script had actually been generated by a human named Chad. (The two sides recently settled the suit.) A company making AI-powered voice interfaces for fast-food drive-thrus can only complete 30% of jobs without the help of a human reviewing its work. Amazon is dropping its automated "Just Walk Out" checkout systems from new stores -- a system that relied on far more human verification than it was hoping for.

We've seen this before -- though it may already be lost to Silicon Valley's pathologically short memory. Back in 2015, AI chatbots were the hot thing. Tech giants and startups alike pitched them as always-available, always-chipper, always-reliable assistants. One startup, x.ai, advertised an AI assistant who could read your emails and schedule your meetings. Another, GoButler, offered to book your flights or order your fries through a delivery app. Facebook also tested a do-anything concierge service called M, which could answer seemingly any question, do almost any task, and draw you pictures on demand. But for all of those services, the "AI assistant" was often just a person. Back in 2016, I wrote a story about this and interviewed workers whose job it was to be the human hiding behind the bot, making sure the bot never made a mistake or spoke nonsense.

Cloud

Irish Power Crunch Could Be Prompting AWS To Ration Compute Resources (theregister.com) 16

Datacenter power issues in Ireland may be coming to a head amid reports from customers that Amazon is restricting resources users can spin up in that nation, even directing them to other AWS regions across Europe instead. From a report: Energy consumed by datacenters is a growing concern, especially in places such as Ireland where there are clusters of facilities around Dublin that already account for a significant share of the country's energy supply. This may be leading to restrictions on how much infrastructure can be used, given the power requirements. AWS users have informed The Register that there are sometimes limits on the resources that they can access in its Ireland bit barn, home to Amazon's eu-west-1 region, especially with power-hungry instances that make use of GPUs to accelerate workloads such as AI.

"You cannot spin up GPU nodes in AWS Dublin as those locations are maxed out power-wise. There is reserved capacity for EC2 just in case," one source told us. "If you have a problem with that, AWS Europe will point you at spare capacity in Sweden and other parts of the EU." We asked AWS about these issues, but when it finally responded the company was somewhat evasive. "Ireland remains core to our global infrastructure strategy, and we will continue to work with customers to understand their needs, and help them to scale and grow their business," a spokesperson told us. Ireland's power grid operator, EirGrid, was likewise less than direct when we asked if they were limiting the amount of power datacenters could consume.

Canada

Canadian Legislators Accused of Using AI To Produce 20,000 Amendments (www.cbc.ca) 62

sinij shares a report: Members of Parliament in Canada are expected to vote for up to 15 hours in a row Thursday and Friday on more than 200 Conservative amendments to the government's sustainable jobs bill. The amendments are what's left of nearly 20,000 changes the Conservatives proposed to Bill C-50 last fall at a House of Commons committee. Liberals now contend the Conservatives came up with the amendments using artificial intelligence in order to gum up the government's agenda. The Conservatives deny that accusation.
AI

OpenAI Makes ChatGPT 'More Direct, Less Verbose' (techcrunch.com) 36

Kyle Wiggers reports via TechCrunch: OpenAI announced today that premium ChatGPT users -- customers paying for ChatGPT Plus, Team or Enterprise -- can now leveraged an updated and enhanced version of GPT-4 Turbo, one of the models that powers the conversational ChatGPT experience. This new model ("gpt-4-turbo-2024-04-09") brings with it improvements in writing, math, logical reasoning and coding, OpenAI claims, as well as a more up-to-date knowledge base. It was trained on publicly available data up to December 2023, in contrast to the previous edition of GPT-4 Turbo available in ChatGPT, which had an April 2023 cut-off. "When writing with ChatGPT [with the new GPT-4 Turbo], responses will be more direct, less verbose and use more conversational language," OpenAI writes in a post on X.
Education

Students Are Likely Writing Millions of Papers With AI 115

Amanda Hoover reports via Wired: Students have submitted more than 22 million papers that may have used generative AI in the past year, new data released by plagiarism detection company Turnitin shows. A year ago, Turnitin rolled out an AI writing detection tool that was trained on its trove of papers written by students as well as other AI-generated texts. Since then, more than 200 million papers have been reviewed by the detector, predominantly written by high school and college students. Turnitin found that 11 percent may contain AI-written language in 20 percent of its content, with 3 percent of the total papers reviewed getting flagged for having 80 percent or more AI writing. Turnitin says its detector has a false positive rate of less than 1 percent when analyzing full documents.
Education

Code.org Launches AI Teaching Assistant For Grades 6-10 In Stanford Partnership (illinois.edu) 16

theodp writes: From a Wednesday press release: "Code.org, in collaboration with The Piech Lab at Stanford University, launched today its AI Teaching Assistant, ushering in a new era of computer science instruction to support teachers in preparing students with the foundational skills necessary to work, live and thrive in an AI world. [...] Launching as a part of Code.org's leading Computer Science Discoveries (CSD) curriculum [for grades 6-10], the tool is designed to bolster teacher confidence in teaching computer science." EdWeek reports that in a limited pilot project involving twenty teachers nationwide, the AI computer science grading tool cut one middle school teacher's grading time in half. Code.org is now inviting an additional 300 teachers to give the tool a try. "Many teachers who lead computer science courses," EdWeek notes, "don't have a degree in the subject -- or even much training on how to teach it -- and might be the only educator in their school leading a computer science course."

Stanford's Piech Lab is headed by assistant professor of CS Chris Piech, who also runs the wildly-successful free Code in Place MOOC (30,000+ learners and counting), which teaches fundamentals from Stanford's flagship introduction to Python course. Prior to coming up with the new AI teaching assistant, which automatically assesses Code.org students' JavaScript game code, Piech worked on a Stanford Research team that partnered with Code.org nearly a decade ago to create algorithms to generate hints for K-12 students trying to solve Code.org's Hour of Code block-based programming puzzles (2015 paper [PDF]). And several years ago, Piech's lab again teamed with Code.org on Play-to-Grade, which sought to "provide scalable automated grading on all types of coding assignments" by analyzing the game play of Code.org students' projects. Play-to-Grade, a 2022 paper (PDF) noted, was "supported in part by a Stanford Hoffman-Yee Human Centered AI grant" for AI tutors to help prepare students for the 21st century workforce. That project also aimed to develop a "Super Teaching Assistant" for Piech's Code in Place MOOC. LinkedIn co-founder Reid Hoffman, who was present for the presentation of the 'AI Tutors' work he and his wife funded, is a Code.org Diamond Supporter ($1+ million).
In other AI grading news, Texas will use computers to grade written answers on this year's STAAR tests. The state will save more than $15 million by using technology similar to ChatGPT to give initial scores, reducing the number of human graders needed.

Slashdot Top Deals