Facebook

Zuckerberg Says Meta's AI Systems Have Begun Improving Themselves, And Developing Superintelligence Is Now in Sight (meta.com) 122

Mark Zuckerberg said Wednesday that Meta's AI systems have begun improving themselves over the past few months, calling the development "slow for now, but undeniable" and declaring that superintelligence is now within reach. In a blog post, the Meta CEO staked out the company's vision for what he termed "personal superintelligence" -- AI that helps individuals achieve their goals rather than replacing human work entirely.

Zuckerberg drew a sharp line between Meta's approach and that of other companies in the field, arguing that competitors want superintelligence "directed centrally towards automating all valuable work, and then humanity will live on a dole of its output." Meta's version would give people their own superintelligent assistants that know them deeply and help them create, experience adventures, and become better friends.

Zuckerberg envisions smart glasses, which understand context through what users see and hear throughout the day, as the primary computing device. The next few years represent a critical juncture, Zuckerberg wrote, calling the rest of this decade "the decisive period for determining the path this technology will take."
Google

Google Execs Say Employees Have To 'Be More AI-Savvy' 88

An anonymous reader quotes a report from CNBC: Google executives are pushing employees to act with more urgency in their use of artificial intelligence as the company looks for ways to cut costs. That was the message at an all-hands meeting last week, featuring CEO Sundar Pichai and Brian Saluzzo, who runs the teams building the technical foundation for Google's flagship products. "Anytime you go through a period of extraordinary investment, you respond by adding a lot of headcount, right?" Pichai said, according to audio obtained by CNBC. "But in this AI moment, I think we have to accomplish more by taking advantage of this transition to drive higher productivity. [...] We are competing with other companies in the world," Pichai said at the meeting. "There will be companies which will become more efficient through this moment in terms of employee productivity, which is why I think it's important to focus on that." [...]

"We are going to be going through a period of much higher investment and I think we have to be frugal with our resources, and I would strive to be more productive and efficient as a company," Pichai said, adding that he's "very optimistic" about how Google is doing. At the meeting, Saluzzo highlighted a number of tools the company is building for software engineers, or SWEs, to help "everybody at Google be more AI-savvy." "We feel the urgency to really quickly and urgently get AI into more of the coding workflows to address top needs so you see a much more rapid increase in velocity," Saluzzo said. Saluzzo said Google has a portfolio of AI products available to employees "so folks can go faster." He mentioned an internal site called "AI Savvy Google" which has courses, toolkits and learning sessions, including some for individual product areas.

Google's engineering education team, which develops courses for internal and external use, partnered with DeepMind on a training called "Building with Gemini" that the company will start promoting soon, Saluzzo said. He also referenced a new internal AI coding tool called Cider that helps software engineers with various aspects of the development process. Since the company first introduced Cider in May, 50% of its users have tapped the service on a weekly basis, Saluzzo said. Regarding Google's internal AI tools, Saluzzo said that employees should "expect them to continuously get better" and that "they'll become a pretty integral part of most SWE work."
AI

Cheyenne To Host Massive AI Datacenter Using More Electricity Than All Wyoming Homes Combined (apnews.com) 51

An anonymous reader quotes a report from the Associated Press: An artificial intelligence data center that would use more electricity than every home in Wyoming combined before expanding to as much as five times that size will be built soon near Cheyenne, according to the city's mayor. "It's a game changer. It's huge," Mayor Patrick Collins said Monday. With cool weather -- good for keeping computer temperatures down -- and an abundance of inexpensive electricity from a top energy-producing state, Wyoming's capital has become a hub of computing power. The city has been home to Microsoft data centers since 2012. An $800 million data center announced last year by Facebook parent company Meta Platforms is nearing completion, Collins said.

The latest data center, a joint effort between regional energy infrastructure company Tallgrass and AI data center developer Crusoe, would begin at 1.8 gigawatts of electricity and be scalable to 10 gigawatts, according to a joint company statement. A gigawatt can power as many as 1 million homes. But that's more homes than Wyoming has people. The least populated state, Wyoming, has about 590,000 people. And it's a major exporter of energy. A top producer of coal, oil and gas, Wyoming ranks behind only Texas, New Mexico and Pennsylvania as a top net energy-producing state, according to the U.S. Energy Information Administration.
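The scale is easier to grasp with the article's own figures. A back-of-the-envelope sketch (the one-million-homes-per-gigawatt figure is the article's rough upper bound, not a precise rating):

```python
# Rough scale of the proposed Cheyenne data center, using the
# article's figures: 1.8 GW initially, scalable to 10 GW, and
# "a gigawatt can power as many as 1 million homes".
HOMES_PER_GW = 1_000_000      # article's upper-bound estimate
WYOMING_POPULATION = 590_000  # article's figure for the whole state

initial_gw, full_gw = 1.8, 10.0

initial_homes = initial_gw * HOMES_PER_GW
full_homes = full_gw * HOMES_PER_GW

print(f"Initial build: ~{initial_homes:,.0f} homes' worth of power")
print(f"Full build-out: ~{full_homes:,.0f} homes' worth of power")
# Even the initial 1.8 GW exceeds Wyoming's entire population,
# let alone its (smaller) number of households.
print(initial_homes > WYOMING_POPULATION)  # True
```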

Accounting for fossil fuels, Wyoming produces about 12 times more energy than it consumes. The state exports almost three-fifths of the electricity it produces, according to the EIA. But this proposed data center is so big, it would have its own dedicated energy from gas generation and renewable sources, according to Collins and company officials. [...] While data centers are energy-hungry, experts say companies can help reduce their effect on the climate by powering them with renewable energy rather than fossil fuels. Even so, electricity customers might see their bills increase as utilities plan for massive data projects on the grid. The data center would be built several miles (kilometers) south of Cheyenne off U.S. 85 near the Colorado state line. State and local regulators would need to sign off on the project, but Collins was optimistic construction could begin soon. "I believe their plans are to go sooner rather than later," Collins said.

YouTube

YouTube Rolls Out Age-Estimation Tech To Identify US Teens, Apply Additional Protections 37

YouTube is rolling out age-estimation technology in the U.S. to identify teen users in order to provide a more age-appropriate experience. TechCrunch reports: When YouTube identifies a user as a teen, it introduces new protections and experiences, which include disabling personalized advertising, safeguards that limit repetitive viewing of certain types of content, and enabling digital well-being tools such as screen time and bedtime reminders, among others. These protections already exist on YouTube, but have only been applied to those who verified themselves as teens, not those who may have withheld their real age. [...]

If the new system incorrectly identifies a user as under 18 when they are not, YouTube says the user will be given the option to verify their age with a credit card, government ID, or selfie. Only users who have been directly verified through this method or whose age has been inferred to be over 18 will be able to view the age-restricted content on the platform. The machine learning-powered technology will begin to roll out over the next few weeks to a small set of U.S. users and will then be monitored before rolling out more widely, the company says. [...]

YouTube isn't sharing specifics about the signals it uses to infer a user's age, but notes that it will look at data such as a user's YouTube activity and the longevity of their account to determine whether the user is under 18. The new system will apply only to signed-in users, since signed-out users already cannot access age-restricted content, and will be available across platforms, including web, mobile, and connected TV.
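YouTube has not disclosed its model, but the general shape of such a system is a classifier over account signals. A purely illustrative toy sketch, where every feature name, weight, and threshold is invented for illustration:

```python
# Purely illustrative: YouTube has not disclosed its signals or model.
# This toy logistic scorer shows the general shape of inferring
# "likely under 18" from account metadata; the features and weights
# below are invented, not YouTube's.
import math

def under_18_probability(account_age_days: float,
                         teen_content_ratio: float) -> float:
    """Toy logistic model over two hypothetical signals."""
    # Newer accounts and heavier teen-oriented viewing push the score
    # up; these coefficients are arbitrary illustration values.
    z = 1.5 * teen_content_ratio - 0.002 * account_age_days + 0.2
    return 1 / (1 + math.exp(-z))

def classify(account_age_days: float, teen_content_ratio: float,
             threshold: float = 0.5) -> str:
    p = under_18_probability(account_age_days, teen_content_ratio)
    return "apply teen protections" if p >= threshold else "default experience"

print(classify(account_age_days=90, teen_content_ratio=0.8))
print(classify(account_age_days=4000, teen_content_ratio=0.1))
```

Misclassified adults would then fall through to the verification path (credit card, ID, or selfie) the article describes.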
Operating Systems

Linux 6.16 Brings Faster File Systems, Improved Confidential Memory Support, and More Rust Support (zdnet.com) 50

ZDNet's Steven Vaughan-Nichols shares his list of "what's new and improved" in the latest Linux 6.16 kernel. An anonymous reader shares an excerpt from the report: First, the Rust language is becoming better integrated into the kernel. At the top of my list is that the kernel now boasts Rust bindings for the driver core and PCI device subsystem. This approach will make it easier to add new Rust-based hardware drivers to Linux. Additionally, new Rust abstractions have been integrated into the Direct Rendering Manager (DRM), particularly for ioctl handling, file/GEM memory management, and driver/device infrastructure for major GPU vendors, such as AMD, Nvidia, and Intel. These changes should reduce vulnerabilities and optimize graphics performance, which will make gamers and AI/ML developers happier.

Linux 6.16 also brings general improvements to Rust crate support. A crate is Rust's unit of code packaging and distribution. This will make it easier to build, maintain, and integrate Rust kernel modules into the kernel. For those of you who still love C, don't worry. The vast majority of kernel code remains in C, and Rust is unlikely to replace C soon. In a decade, we may be telling another story. Beyond Rust, this latest release also comes with several major file system improvements. For starters, the XFS filesystem now supports large atomic writes. This means that large multi-block write operations are 'atomic': either all blocks are updated, or none are. This enhances data integrity and prevents partial-write errors. This move is significant for companies that use XFS for databases and large-scale storage.

Perhaps the most popular Linux file system, Ext4, is also getting many improvements. These boosts include faster commit paths, large folio support, and atomic multi-fsblock writes for bigalloc filesystems. What these improvements mean, if you're not a file-system nerd, is that we should see speedups of up to 37% for sequential I/O workloads. If your Linux laptop doubles as a music player, another nice new feature is that you can now stream your audio over USB even while the rest of your system is asleep. That capability's been available in Android for a while, but now it's part of mainline Linux.

If security is a top priority for you, the 6.16 kernel now supports Intel Trusted Execution Technology (TXT) and Intel Trust Domain Extensions (TDX). This addition, along with Linux's improved support for AMD Secure Encrypted Virtualization with Secure Nested Paging (SEV-SNP), enables you to encrypt your software's memory in what's known as confidential computing. This feature improves cloud security by encrypting a user's virtual machine memory, meaning someone who cracks a cloud can't access your data.

Linux 6.16 also delivers several chip-related upgrades. It introduces support for Intel's Advanced Performance Extensions (APX), doubling x86 general-purpose registers from 16 to 32 and boosting performance on next-gen CPUs like Lunar Lake and Granite Rapids Xeon. Additionally, the new CONFIG_X86_NATIVE_CPU option allows users to build processor-optimized kernels for greater efficiency.
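For those who build their own kernels, enabling the new option is a short sequence using the config helper that ships in the kernel tree (a sketch assuming a 6.16 source checkout and a compiler that supports `-march=native` for your CPU):

```shell
# Sketch: enable the native-CPU optimization in a Linux 6.16 tree.
# scripts/config is the kernel's own .config manipulation helper.
cd linux-6.16
scripts/config --enable CONFIG_X86_NATIVE_CPU
make olddefconfig        # resolve any dependent config options
make -j"$(nproc)"        # build the processor-optimized kernel
```

Kernels built this way are tuned to the build machine's CPU, so the resulting image should not be expected to run optimally (or at all) on older processors.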

Support for Nvidia's AI-focused Blackwell GPUs has also been improved, and updates to TCP/IP with DMABUF help offload networking tasks to GPUs and accelerators. While these changes may go unnoticed by everyday users, high-performance systems will see gains and OpenVPN users may finally experience speeds that challenge WireGuard.
AI

Cisco Donates the AGNTCY Project to the Linux Foundation 7

Cisco has donated its AGNTCY initiative to the Linux Foundation, aiming to create an open-standard "Internet of Agents" to allow AI agents from different vendors to collaborate seamlessly. The project is backed by tech giants like Google Cloud, Dell, Oracle and Red Hat. "Without such an interoperable standard, companies have been rushing to build specialized AI agents," writes ZDNet's Steven Vaughan-Nichols. "These work in isolated silos that cannot work and play well with each other. This, in turn, makes them less useful for customers than they could be." From the report: AGNTCY was first open-sourced by Cisco in March 2025 and has since attracted support from over 75 companies. By moving it under the Linux Foundation's neutral governance, the hope is that everyone else will jump on the AGNTCY bandwagon, thus making it an industry-wide standard. The Linux Foundation has a long history of providing common ground for what otherwise might be contentious technology battles. The project provides a complete framework to solve the core challenges of multi-agent collaboration:

- Agent Discovery: An Open Agent Schema Framework (OASF) acts like a "DNS for agents," allowing them to find and understand the capabilities of others.
- Agent Identity: A system for cryptographically verifiable identities ensures agents can prove who they are and perform authorized actions securely across different vendors and organizations.
- Agent Messaging: A protocol named Secure Low-latency Interactive Messaging (SLIM) is designed for the complex, multi-modal communication patterns of agents, with built-in support for human-in-the-loop interaction and quantum-safe security.
- Agent Observability: A specialized monitoring framework provides visibility into complex, multi-agent workflows, which is crucial for debugging probabilistic AI systems.
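To make the discovery piece concrete, here is a purely hypothetical sketch of an OASF-style "DNS for agents" lookup. AGNTCY's actual schema and APIs live in its GitHub repositories; every record field, class, and function name below is invented for illustration:

```python
# Hypothetical sketch of "DNS for agents"-style discovery.
# The real OASF schema is defined in AGNTCY's repositories; the
# record fields and the lookup API here are invented.
from dataclasses import dataclass

@dataclass
class AgentRecord:
    name: str              # globally unique agent name
    skills: list[str]      # advertised capabilities
    protocols: list[str]   # e.g. "A2A", "MCP", "SLIM"
    endpoint: str          # where to reach the agent

class AgentDirectory:
    """Toy in-memory directory: register agents, find them by skill."""
    def __init__(self) -> None:
        self._records: dict[str, AgentRecord] = {}

    def register(self, record: AgentRecord) -> None:
        self._records[record.name] = record

    def find_by_skill(self, skill: str) -> list[AgentRecord]:
        return [r for r in self._records.values() if skill in r.skills]

directory = AgentDirectory()
directory.register(AgentRecord(
    name="acme/translator",
    skills=["translate", "summarize"],
    protocols=["A2A", "SLIM"],
    endpoint="https://agents.example.com/translator",  # hypothetical
))
matches = directory.find_by_skill("translate")
print([r.name for r in matches])  # ['acme/translator']
```

The point of standardizing such records is that an agent built on A2A and a server speaking MCP can both be listed in, and found through, the same directory.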

You may well ask, aren't there other emerging AI agent standards? You're right. There are. These include the Agent2Agent (A2A) protocol, which was also recently contributed to the Linux Foundation, and Anthropic's Model Context Protocol (MCP). AGNTCY will help agents using these protocols discover each other and communicate securely. In more detail, it looks like this: AGNTCY enables interoperability and collaboration in three primary ways:

- Discovery: Agents using the A2A protocol and servers using MCP can be listed and found through AGNTCY's directories. This enables different agents to discover each other and understand their functions.
- Messaging: A2A and MCP communications can be transported over SLIM, AGNTCY's messaging protocol designed for secure and efficient agent interaction.
- Observability: The interactions between these different agents and protocols can be monitored using AGNTCY's observability software development kits (SDKs), which increase transparency and help with debugging complex workflows.

You can view AGNTCY's code and documentation on GitHub.
Education

ChatGPT's New Study Mode Is Designed To Help You Learn, Not Just Give Answers 29

An anonymous reader quotes a report from Ars Technica: The rise of large language models like ChatGPT has led to widespread concern that "everyone is cheating their way through college," as a recent New York magazine article memorably put it. Now, OpenAI is rolling out a new "Study Mode" that it claims is less about providing answers or doing the work for students and more about helping them "build [a] deep understanding" of complex topics.

Study Mode isn't a new ChatGPT model but a series of "custom system instructions" written for the LLM "in collaboration with teachers, scientists, and pedagogy experts to reflect a core set of behaviors that support deeper learning," OpenAI said. Instead of the usual summary of a subject that stock ChatGPT might give -- which one OpenAI employee likened to "a mini textbook chapter" -- Study Mode slowly rolls out new information in a "scaffolded" structure. The mode is designed to ask "guiding questions" in the Socratic style and to pause for periodic "knowledge checks" and personalized feedback to make sure the user understands before moving on. It's unknown how many students will use this guided learning tool instead of just asking ChatGPT to generate answers from the start.

In an early hands-off demo attended by Ars Technica, Study Mode responded to a request to "teach me about game theory" by first asking about the user's overall familiarity with the subject and what they'll be using the information for. ChatGPT introduced a short overview of some core game theory concepts, then paused to ask a question before providing a relevant real-world example. In another example involving a classic "train traveling at speed" math problem, Study Mode resisted multiple simulated attempts by the frustrated "student" to simply ask for the answer and instead tried to gently redirect the conversation to how the available information could be used to generate that answer. An OpenAI representative told Ars that Study Mode will eventually provide direct solutions if asked repeatedly, but the default behavior is more tuned to a Socratic tutoring style.
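The redirect-then-relent behavior Ars describes can be pictured as a small state machine. This is a toy illustration of the conversational pattern, not OpenAI's implementation, which is driven by system instructions rather than explicit code:

```python
# Illustrative sketch of the behavior Ars describes: Study Mode
# redirects repeated "just give me the answer" requests toward
# guiding questions, and only relents after several attempts.
# A toy state machine, not OpenAI's actual implementation.

class StudyModeTutor:
    def __init__(self, answer: str, patience: int = 3):
        self.answer = answer      # the direct solution, held back
        self.patience = patience  # redirects before relenting
        self.demands = 0

    def respond(self, message: str) -> str:
        if "give me the answer" in message.lower():
            self.demands += 1
            if self.demands < self.patience:
                # Socratic redirect instead of the solution
                return ("Let's work it out together: which quantities "
                        "from the problem do you already know?")
            return f"The answer is {self.answer}."
        # Non-demand messages get a guiding question back
        return "Good question -- what do you think happens first?"

tutor = StudyModeTutor(answer="the trains meet after 2 hours")
print(tutor.respond("Just give me the answer"))  # redirect
print(tutor.respond("Just give me the answer"))  # redirect
print(tutor.respond("Just give me the answer"))  # finally relents
```

In the real product the same effect comes from prompt-level instructions, which is also why OpenAI warns the behavior can be inconsistent across conversations.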

OpenAI said it drew inspiration for Study Mode from "power users" and collaborated with pedagogy experts and college students to help refine its responses. As for whether the mode can be trusted, OpenAI told Ars that "the risk of hallucination is lower with Study Mode because the model processes information in smaller chunks, calibrating along the way."

The current Study Mode prompt does, however, result in some "inconsistent behavior and mistakes across conversations," the company warned.
AI

Apple Loses Fourth AI Researcher in a Month To Meta 25

Apple has lost its fourth AI researcher in a month to Meta [non-paywalled source], marking the latest setback to the iPhone maker's AI efforts. From a report: Bowen Zhang, a key multimodal AI researcher at Apple, left the company on Friday and is set to join Meta's recently formed superintelligence team, according to people familiar with the matter. Zhang was part of the Apple foundation models group, or AFM, which built the core technology behind the company's AI platform.

Meta previously lured away the leader of the team, Ruoming Pang, with a compensation package valued at more than $200 million, Bloomberg News has reported. Two other researchers from that group -- Tom Gunter and Mark Lee -- also recently joined Meta. AFM is made up of several dozen engineers and researchers across Cupertino, California, and New York. In response to the job offers from Meta and others, Apple has been marginally increasing the pay of its AFM staffers, whether or not they've threatened to leave, said the people, who asked not to be identified because the moves are private. Still, the pay levels pale in comparison with those of rivals.
AI

60% of Americans Use AI for Search, Only 37% for Workplace Tasks, New Poll Finds (apnews.com) 65

60% of American adults use AI to search for information, but far fewer have adopted the technology for workplace productivity, according to a new Associated Press-NORC Center for Public Affairs Research poll. Only 37% of respondents reported using AI for work tasks, while 40% said they use it for brainstorming ideas.

The survey of 1,437 adults, conducted July 10-14, reveals a significant generational gap in AI adoption. Among adults under 30, 74% use AI for information searches and 62% for generating ideas, compared to just 23% of those over 60 who use it for brainstorming. About one-third of Americans use AI for writing emails, creating or editing images, or entertainment purposes. A quarter use it for shopping, while 16% report using AI for companionship -- a figure that rises to 25% among younger adults.
AI

Anthropic Nears Deal To Raise Funding at $170 Billion Valuation (bloomberg.com) 21

An anonymous reader shares a report: Anthropic is nearing a deal to raise as much as $5 billion in a new round of funding that would value the AI startup at $170 billion, according to a person familiar with the matter. Investment firm Iconiq Capital is leading the round, which is expected to total between $3 billion and $5 billion, said the person, who spoke on condition of anonymity to discuss private information.

Anthropic has also been in discussions with the Qatar Investment Authority and Singapore's sovereign fund GIC about participating in the round, the person said. The new financing would mark a significant jump in valuation for the company and cement its status as one of the world's leading AI developers. Anthropic was valued at $61.5 billion in a $3.5 billion round led by Lightspeed Venture Partners earlier this year.

Power

AI Boom Sparks Fight Over Soaring Power Costs 88

Utilities across the U.S. are demanding tech companies pay larger shares of electricity infrastructure costs as AI drives unprecedented data center construction, creating tensions over who bears the financial burden of grid upgrades.

Virginia utility Dominion Energy received requests from data center developers requiring 40 gigawatts of electricity by the end of 2024, enough to power at least 10 million homes, and proposed measures requiring longer-term contracts and guaranteed payments. Ohio became one of the first states to mandate companies pay more connection costs after receiving power requests exceeding 50 times existing data center usage.

Tech giants Microsoft, Google, and Amazon plan to spend $80 billion, $85 billion, and $100 billion respectively this year on AI infrastructure, while utilities worry that grid upgrade costs will increase rates for residential customers.

Further reading: The AI explosion means millions are paying more for electricity
Programming

Claude Code Users Hit With Weekly Rate Limits (techcrunch.com) 43

Anthropic will implement weekly rate limits for Claude subscribers starting August 28 to address users running its Claude Code AI programming tool continuously around the clock and to prevent account sharing violations. The new restrictions will affect Pro subscribers paying $20 monthly and Max plan subscribers paying $100 and $200 monthly, though Anthropic estimates fewer than 5% of current users will be impacted based on existing usage patterns.

Pro users will receive 40 to 80 hours of Sonnet 4 access through Claude Code weekly, while $100 Max subscribers get 140 to 280 hours of Sonnet 4 plus 15 to 35 hours of Opus 4. The $200 Max plan provides 240 to 480 hours of Sonnet 4 and 24 to 40 hours of Opus 4. Claude Code has experienced at least seven outages in the past month due to unprecedented demand.
AI

OpenAI's ChatGPT Agent Casually Clicks Through 'I Am Not a Robot' Verification Test 37

An anonymous reader quotes a report from Ars Technica: On Friday, OpenAI's new ChatGPT Agent, which can perform multistep tasks for users, proved it can pass through one of the Internet's most common security checkpoints by clicking Cloudflare's anti-bot verification -- the same checkbox that's supposed to keep automated programs like itself at bay. ChatGPT Agent is a feature that allows OpenAI's AI assistant to control its own web browser, operating within a sandboxed environment with its own virtual operating system and browser that can access the real Internet. Users can watch the AI's actions through a window in the ChatGPT interface, maintaining oversight while the agent completes tasks. The system requires user permission before taking actions with real-world consequences, such as making purchases. Recently, Reddit users discovered the agent could do something particularly ironic.

The evidence came from Reddit, where a user named "logkn" of the r/OpenAI community posted screenshots of the AI agent effortlessly clicking through the screening step that would otherwise present a CAPTCHA (short for "Completely Automated Public Turing test to tell Computers and Humans Apart") while completing a video conversion task -- narrating its own process as it went. The screenshots shared on Reddit capture the agent navigating a two-step verification process: first clicking the "Verify you are human" checkbox, then proceeding to click a "Convert" button after the Cloudflare challenge succeeds. The agent provides real-time narration of its actions, stating "The link is inserted, so now I'll click the 'Verify you are human' checkbox to complete the verification on Cloudflare. This step is necessary to prove I'm not a bot and proceed with the action."
Businesses

Tesla Signs $16.5 Billion Contract With Samsung To Make AI Chips 51

An anonymous reader quotes a report from CNBC: Samsung Electronics has entered into a $16.5 billion contract for supplying semiconductors to Tesla, based on a regulatory filing by the South Korean firm and Tesla CEO Elon Musk's posts on X. The memory chipmaker, which had not named the counterparty, mentioned in its filing that the effective start date of the contract was July 26, 2025 -- receipt of orders -- and its end date was Dec. 31, 2033. However, Musk later confirmed in a reply to a post on social media platform X that Tesla was the counterparty.

He also posted: "Samsung's giant new Texas fab will be dedicated to making Tesla's next-generation AI6 chip. The strategic importance of this is hard to overstate. Samsung currently makes AI4. TSMC will make AI5, which just finished design, initially in Taiwan and then Arizona. Samsung agreed to allow Tesla to assist in maximizing manufacturing efficiency. This is a critical point, as I will walk the line personally to accelerate the pace of progress," Musk said on X, and suggested that the deal with Samsung could likely be even larger than the announced $16.5 billion.

Samsung earlier said that details of the deal, including the name of the counterparty, will not be disclosed until the end of 2033, citing a request from the second party "to protect trade secrets," according to a Google translation of the filing in Korean on Monday. "Since the main contents of the contract have not been disclosed due to the need to maintain business confidentiality, investors are advised to invest carefully considering the possibility of changes or termination of the contract," the company said.
Microsoft

Microsoft Adds Copilot Mode To Edge (windows.com) 49

Microsoft today launched Copilot Mode, an experimental feature that transforms Edge into an AI-powered browser experience. Available free for a limited time on Windows and Mac in markets where Copilot operates, the mode places AI at the center of web browsing through a single input interface combining chat, search, and navigation.

The feature enables Copilot to view content across all open browser tabs, handle voice commands, and assist with tasks like comparing websites. Future capabilities will include booking reservations and managing errands through natural language commands. Microsoft has not specified when the free trial ends, though the feature will likely require a Copilot Pro subscription afterward.
China

Chinese Universities Want Students To Use More AI, Not Less (technologyreview.com) 124

Chinese universities are actively encouraging students to use AI tools in their coursework, marking a departure from Western institutions that continue to wrestle with AI's educational role. A survey by the Mycos Institute found that 99% of Chinese university faculty and students use AI tools, with nearly 60% using them multiple times daily or weekly.

The shift represents a complete reversal from two years ago when students were told to avoid AI for assignments. Universities including Tsinghua, Renmin, Nanjing, and Fudan have rolled out AI literacy courses and degree programs open to all students, not just computer science majors. The Chinese Ministry of Education released national "AI+ education" guidelines in April 2025 calling for sweeping reforms. Meanwhile, 80% of job openings for fresh graduates now list AI skills as advantageous.
Windows

Windows 11 is a 'Minefield of Micro-aggressions in the Shipping Lane of Progress' (theregister.com) 220

Windows 11 has become indistinguishable from malware because of the way Microsoft has loaded the operating system with intrusive advertising, AI monitoring features, and constant distractions designed to drive user engagement and monetization, argues veteran writer and developer Rupert Goodwins of The Register.

Goodwins contends that Microsoft has transformed Windows 11 into "an ADHD horror show, full of distractions, promotions and snares" where AI features "constantly video what you're doing and send it back to Mother." He applies the term malware to describe software that intervenes in work to advertise and monitors user data, concluding that "for Windows it isn't a class of third-party nasties, it's an edition name."
China

Huawei Shows Off 384-Chip AI Computing System That Rivals Nvidia's Top Product (msn.com) 118

Long-time Slashdot reader hackingbear writes: China's Huawei Technologies showed off an AI computing system on Saturday that can rival Nvidia's most advanced offering, even though the company faces U.S. export restrictions. The CloudMatrix 384 system made its first public debut at the World Artificial Intelligence Conference (WAIC), a three-day event in Shanghai where companies showcase their latest AI innovations, drawing a large crowd to the company's booth. The CloudMatrix 384 incorporates 384 of Huawei's latest 910C chips, optically connected through an all-to-all topology, and outperforms Nvidia's GB200 NVL72 on some metrics, which uses 72 B200 chips, according to SemiAnalysis. A full CloudMatrix system can now deliver 300 PFLOPs of dense BF16 compute, almost double that of the GB200 NVL72. With more than 3.6x aggregate memory capacity and 2.1x more memory bandwidth, Huawei and China "now have AI system capabilities that can beat Nvidia's," according to a report by SemiAnalysis.

The trade-off is that it takes 4.1x the power of a GB200 NVL72, with 2.5x worse power per FLOP, 1.9x worse power per TB/s memory bandwidth, and 1.2x worse power per TB HBM memory capacity, but SemiAnalysis noted that China has no power constraints, only chip constraints. Nvidia had announced DGX H100 NVL256 "Ranger" Platform [with 256 GPUs], SemiAnalysis writes, but "decided to not bring it to production due to it being prohibitively expensive, power hungry, and unreliable due to all the optical transceivers required and the two tiers of network. The CloudMatrix Pod requires an incredible 6,912 400G LPO transceivers for networking, the vast majority of which are for the scaleup network."
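The cited efficiency figures hang together arithmetically. A quick check using the article's 300 PFLOPs number and the ~180 PFLOPs dense-BF16 rating commonly cited for a GB200 NVL72 (an outside assumption; the article itself only says "almost double"):

```python
# Sanity-check the SemiAnalysis efficiency claims from the article's
# figures. The 180 PFLOPs dense-BF16 figure for GB200 NVL72 is an
# assumed outside number (72 x ~2.5 PFLOPs per B200); the article
# only says CloudMatrix's 300 PFLOPs is "almost double" it.
cloudmatrix_pflops = 300.0
gb200_nvl72_pflops = 180.0   # assumption, not from the article
power_ratio = 4.1            # CloudMatrix draws 4.1x the power

compute_ratio = cloudmatrix_pflops / gb200_nvl72_pflops
power_per_flop_penalty = power_ratio / compute_ratio

print(f"Compute advantage: {compute_ratio:.2f}x")
print(f"Power-per-FLOP penalty: {power_per_flop_penalty:.2f}x")
# ~1.67x the compute at 4.1x the power works out to a ~2.46x
# power-per-FLOP penalty, consistent with the cited ~2.5x.
```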

Also at this event, Chinese e-commerce giant Alibaba released a new flagship open-source reasoning model, Qwen3-235B-A22B-Thinking-2507, which has "already topped key industry benchmarks, outperforming powerful proprietary systems from rivals like Google and OpenAI," according to industry reports. On the AIME25 benchmark, a test designed to evaluate sophisticated, multi-step problem-solving skills, Qwen3-Thinking-2507 achieved a remarkable score of 92.3, placing it ahead of some of the most powerful proprietary models, notably surpassing Google's Gemini-2.5 Pro. On LiveCodeBench, Qwen3-Thinking secured a top score of 74.1, comfortably ahead of both Gemini-2.5 Pro and OpenAI's o4-mini, demonstrating its practical utility for developers and engineering teams.
AI

Is ChatGPT Making You Stupid? (theconversation.com) 196

"Search engines still require users to use critical thinking to interpret and contextualize the results," argues Aaron French, an assistant professor of information systems. But with the rise of generative AI tools like ChatGPT, "internet users aren't just outsourcing memory — they may be outsourcing thinking itself." Generative AI tools don't just retrieve information; they can create, analyze and summarize it. This represents a fundamental shift: Arguably, generative AI is the first technology that could replace human thinking and creativity.

That raises a critical question: Is ChatGPT making us stupid...?

[A]s many people increasingly delegate cognitive tasks to AI, I think it's worth considering what exactly we're gaining and what we are at risk of losing.

"For many, it's replacing the need to sift through sources, compare viewpoints and wrestle with ambiguity," the article argues, positing that this "may be weakening their ability to think critically, solve complex problems and engage deeply with information."

But in a section titled "AI and the Dunning-Kruger effect," he suggests "what matters isn't whether a person uses generative AI, but how. If used uncritically, ChatGPT can lead to intellectual complacency." His larger point seems to be that when used as an aid, AI "can become a powerful tool for stimulating curiosity, generating ideas, clarifying complex topics and provoking intellectual dialogue.... to augment human intelligence, not replace it. That means using ChatGPT to support inquiry, not to shortcut it. It means treating AI responses as the beginning of thought, not the end."

He believes mass adoption of generative AI has "left internet users at a crossroads. One path leads to intellectual decline: a world where we let AI do the thinking for us. The other offers an opportunity: to expand our brainpower by working in tandem with AI, leveraging its power to enhance our own." So his article ends with a question — how will we use AI to make us smarter?

Share your own thoughts and experiences in the comments. Do you think your AI use is making you smarter?
Piracy

Creator of 1995 Phishing Tool 'AOHell' On Piracy, Script Kiddies, and What He Thinks of AI (yahoo.com) 14

In 1995's online world, AOL existed mostly beside the internet as a "walled, manicured garden," remembers Fast Company.

Then along came AOHell, "the first of what would become thousands of programs designed by young hackers to turn the system upside down" — built by a high school dropout calling himself "Da Chronic," who says he wrote it on "a computer that I couldn't even afford" with "a pirated copy of Microsoft Visual Basic." [D]istributed throughout the teen chatrooms, the program combined a pile of tricks and pranks into a slick little control panel that sat above AOL's windows and gave even newbies an arsenal of teenage superpowers. There was a punter to kick people out of chatrooms, scrollers to flood chats with ASCII art, a chat impersonator, an email and instant message bomber, a mass mailer for sharing warez (and later mp3s), and even an "Artificial Intelligence Bot" [which performed automated if-then responses]. Crucially, AOHell could also help users gain "free" access to AOL. It came with a generator for fake credit card numbers (which could fool AOL's signup process), and, by January 1995, a feature for stealing other users' passwords or credit cards. With messages masquerading as alerts from AOL customer service reps, the tool could convince unsuspecting users to hand over their secrets...
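Generators like the one described above reportedly worked because AOL's signup only checked whether a number was structurally valid, not whether it belonged to a real account. A minimal sketch of that structural check, the Luhn checksum, is below; the article doesn't specify AOHell's actual method, and the digits in the example are arbitrary:

```python
# Illustration of the Luhn checksum that payment card numbers satisfy.
# This is NOT AOHell's code; it only shows why a naive "is this number
# well-formed?" test is easy to satisfy offline.

def luhn_checksum(digits):
    """Return the Luhn checksum (0 means valid) of a list of digits."""
    total = 0
    # Double every second digit from the right, subtracting 9 if > 9
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10

def complete_number(prefix_digits):
    """Append the check digit that makes the full number Luhn-valid."""
    check = (10 - luhn_checksum(prefix_digits + [0])) % 10
    return prefix_digits + [check]

# An arbitrary 15-digit prefix plus its computed check digit
number = complete_number([4, 5, 3, 2, 0, 1, 5, 1, 1, 2, 8, 3, 0, 3, 6])
assert luhn_checksum(number) == 0  # passes the structural test
```

Because the checksum is public and trivial to compute, any number of "valid-looking" card numbers can be minted without touching a payment network, which is why signup flows now verify numbers against an issuer rather than just checking the format.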

Of course, Da Chronic — actually a 17-year-old high school dropout from North Carolina named Koceilah Rekouche — had other reasons, too. Rekouche wanted to hack AOL because he loved being online with his friends, who were a refuge from a difficult life at home, and he couldn't afford the hourly fee. Plus, it was a thrill to cause havoc and break AOL's weak systems and use them exactly how they weren't meant to be, and he didn't want to keep that to himself. Other hackers "hated the fact that I was distributing this thing, putting it into the team chat room, and bringing in all these noobs and lamers and destroying the community," Rekouche told me recently by phone...

Rekouche also couldn't have imagined what else his program would mean: a free, freewheeling creative outlet for thousands of lonely, disaffected kids like him, and an inspiration for a generation of programmers and technologists. By the time he left AOL in late 1995, his program had spawned a whole cottage industry of teenage script kiddies and hackers, and fueled a subculture where legions of young programmers and artists got their start breaking and making things, using pirated software that otherwise would have been out of reach... In 2014, [AOL CEO Steve] Case himself acknowledged on Reddit that "the hacking of AOL was a real challenge for us," but that "some of the hackers have gone on to do more productive things."

When he first met Mark Zuckerberg, he said, the Facebook founder confessed to Case that "he learned how to program by hacking [AOL]."

"I can't imagine somebody doing that on Facebook today," Da Chronic says in a new interview with Fast Company. "They'll kick you off if you create a Google extension that helps you in the slightest bit on Facebook, or an extension that keeps your privacy or does a little cool thing here and there. That's totally not allowed."

AOHell's creators had called their password-stealing techniques "phishing" — and the name stuck. (AOL was working with federal law enforcement to find him, according to a leaked internal email, but "I didn't even see that until years later.") Enrolled in college, he decided to write a technical academic paper about his program. "I do believe it caught the attention of Homeland Security, but I think they realized pretty quickly that I was not a threat."

He's got an interesting perspective today, noting that with today's AI tools it's theoretically possible to "craft dynamic phishing emails... when I see these AI coding tools I think, this might be like today's Visual Basic. They take out a lot of the grunt work."

What's the moral of the story? "I didn't have any qualifications or anything like that," Da Chronic says. "So you don't know who your adversary is going to be, who's going to understand psychology in some nuanced way, who's going to understand how to put some technological pieces together, using AI, and build some really wild shit."
