Government

Colorado Adds Open-Source Exemption to Age-Verification Bill 18

Colorado's "age-attestation" bill left the House committee with new exemptions for open-source operating systems, applications, code repositories, and containerized software distribution, reports the blog Linuxiac: [The bill] focuses on operating system providers and application stores. Its main requirement is that these providers supply an age-related signal via an interface, so applications can determine whether a user is a minor... System76 founder Carl Richell shared on Fosstodon that the updated bill now includes "a strong exemption for open source distros and apps" and has passed in the House committee. He also quoted the key part, which says Article 30 does not apply to an operating system provider or developer that distributes software under license terms that let recipients copy, redistribute, and modify the software without restrictions from the provider or developer... This wording covers Linux distributions and many open-source applications without linking the exemption to any specific project, company, or ecosystem.

The amendment also excludes applications from free, public code repositories from being considered covered applications. It also excludes code repository providers and containerized software distribution from being defined as covered application stores. This is meant to prevent platforms like GitHub, GitLab, Docker, or Podman-based distributions from being treated like commercial app stores under the bill.

"There are more steps but we're on our way to protecting the open source community," Richell posted on Fosstodon, "at least in Colorado."
GNU is Not Unix

Free Software Foundation Says 'Responsible AI' Licenses Which Restrict Harmful Uses are Unethical and Nonfree 33

The Free Software Foundation's Licensing and Compliance Manager published a blog post this week to explicitly state that "Responsible AI" Licenses (RAIL) are nonfree and unethical. The licenses restrict AI and ML software "from being used in a specific list of harmful applications," according to the license's website, "e.g. in surveillance and crime prediction." (The license's steering committee is made up of volunteers from multiple academic institutions.)

But even though Responsible AI licenses are marketed as addressing ethical challenges, the FSF argues "they do not require anything that is really necessary for users to control their computing done with machine learning, including: complete training inputs, training configuration settings, trained model, or — last, but not least — the source code of software used for training, testing, and running tools based on machine learning." Thus, RAILed machine learning can be, and most probably will be, unethical. Use restrictions do not prevent these licenses from being used to exercise power over users...

RAIL contribute to unethical marketing of machine learning, again under the disguise of morally-loaded restrictions they purport to enforce. If we want software to help decrease social injustice, we should oppose licenses that restrict how software can be used. We should focus on effective ways of addressing injustices: government and community support for freedom-respecting tools and services; releasing programs under strong copyleft licenses; and entrusting copyrights to organizations that have the resources to enforce copyleft.

Software freedom must be defended, not denied. More specifically, the more free software is out there, the more likely people will collaborate on tools and services that do not pose moral dangers and help solve existing ones. Free software also makes it more likely that users have real choices when looking for freedom-respecting ethical programs and tools based on machine learning. Denying people the freedom to a particular program, as RAIL or similar licenses would have it, prevents them from using such program for the common good.
AI

Claude Is Connecting Directly To Your Personal Apps 48

Anthropic is expanding Claude's app integrations beyond work tools, adding personal-service connectors like Spotify, Uber, AllTrails, TripAdvisor, Instacart, and TurboTax. The Verge reports: Some of these apps, such as Spotify, already have similar connectors in OpenAI's ChatGPT. Once an app is connected, Claude will suggest relevant connected apps directly in your conversations, like using AllTrails for hike recommendations. Anthropic notes in its blog post announcing the new connectors that, "Your data from [connected apps] isn't used to train our models, and the app doesn't see your other conversations with Claude. You can also disconnect it at any time."

Additionally, Anthropic says "there are no paid placements or sponsored answers in conversations with Claude." When multiple apps seem relevant, Claude will show results from both "ranked by what's most useful." Claude will also ask users to verify before taking actions like making a purchase or reservation using a connected app.
iOS

Tim Cook Calls Apple Maps Launch His 'First Really Big Mistake' as CEO (macrumors.com) 78

In a recent town hall meeting reported by Bloomberg (paywalled), Apple CEO Tim Cook named the troubled 2012 launch of Apple Maps as his "first really big mistake" in the role. "The product wasn't ready, and we thought it was because we were testing more of local kind of stuff," Cook told staff. MacRumors reports: Reflecting on the debacle, Cook said it was "valuable," noting that he expressed regret to users at the time and suggested they use competing navigation apps instead.

"We apologized for it, and we said, 'Go use these other apps. They're better than ours.' And that was some humble pie," Cook said. "But it was the right thing for our users. And so it's an example of keeping the user at the center of the decisions that we made." Cook added: "Now we've got the best map app on the planet. We learned about persistence, and we did exactly the right thing having made the mistake."

Security

Anthropic's Mythos Model Is Being Accessed by Unauthorized Users (bloomberg.com) 32

Bloomberg reports that a small group of unauthorized users gained access to Anthropic's restricted Mythos model through a mix of contractor-linked access and online sleuthing. Anthropic says it is investigating and has no evidence the access extended beyond a third-party vendor environment or affected its own systems. From the report: The users relied on a mix of tactics to get into Mythos. These included using access the person had as a worker at a third-party contractor for Anthropic and trying commonly used internet sleuthing tools often employed by cybersecurity researchers, the person said. The users are part of a private Discord channel that focuses on hunting for information about unreleased models, including by using bots to scour for details that Anthropic and others have posted on unsecured websites such as GitHub. [...] To access Mythos, the group of users made an educated guess about the model's online location based on knowledge about the format Anthropic has used for other models, the person said, adding that such details were revealed in a recent data breach from Mercor, an AI training startup that works with a number of top developers.

Crucially, the person also has permission to access Anthropic models and software related to evaluating the technology for the startup. They gained this access from a company for which they have performed contract work evaluating Anthropic's AI models. Bloomberg is not naming the company for security reasons. The group is interested in playing around with new models, not wreaking havoc with them, the person said. The group has not run cybersecurity-related prompts on the Mythos model, the person said, preferring instead to try tasks like building simple websites in an attempt to avoid detection by Anthropic. The person said the group also has access to a slew of other unreleased Anthropic AI models.

AI

AI Tool Rips Off Open Source Software Without Violating Copyright (404media.co) 119

A satirical but working tool called Malus uses AI to create "clean room" clones of open-source software, aiming to reproduce the same functionality while shedding attribution and copyleft obligations. "It works," Mike Nolan, one of the two people behind Malus, who researches the political economy of open source software and currently works for the United Nations, told 404 Media. "The Stripe charge will provide you the thing, and it was important for us to do that, because we felt that if it was just satire, it would end up like every other piece of research I've done on open source, which ends up being largely dismissed by open source tech workers who felt that they were too special and too unique and too intelligent to ever be the ones on the bad side of the layoffs or the economics of the situation." 404 Media reports: Malus's legal strategy for bypassing copyright is based on a historically pivotal moment for software and copyright law dating back to 1982. Back then, IBM dominated home computing, and competitors like Columbia Data Products wanted to sell products that were compatible with software that IBM customers were already using. Reverse engineering IBM's computer would have infringed on the company's copyright, so Columbia Data Products came up with what we now know as a "clean room" design.

It tasked one team with examining IBM's BIOS and creating specifications for what a clone of that system would require. A different "clean" team, one that was never exposed to IBM's code, then created BIOS that met those specifications from scratch. The result was a system that was compatible with IBM's ecosystem but didn't violate its copyright because it did not copy IBM's technical process and counted as original work.

This clean room method, which has been validated by case law and dramatized in the first season of Halt and Catch Fire, made computing more open and competitive than it would have been otherwise. But it has taken on new meaning in the age of generative AI. It is now easier than ever to ask AI tools to produce software that is identical in function to existing open source projects and that, some would argue, is built from scratch and is therefore original work that can bypass existing copyright licenses. Others would say that software produced by large language models is inherently derivative because, like any LLM output, it comes from a model trained on the collective output of humans scraped from the internet, including specific open source projects.

Malus (pronounced malice) uses AI to do the same thing. "Finally, liberation from open source license obligations," Malus's site says. "Our proprietary AI robots independently recreate any open source project from scratch. The result? Legally distinct code with corporate-friendly licensing. No attribution. No copyleft. No problems." Copyleft is a type of copyright license that ensures reproductions or applications of the software keep it free to share and modify.

Businesses

SpaceX Strikes Deal With Coding Startup Cursor For $60 Billion (nytimes.com) 74

An anonymous reader quotes a report from the New York Times: SpaceX, Elon Musk's rocket and satellite company, said on Tuesday that it had struck a deal with the artificial intelligence start-up Cursor that could result in its acquiring the young company for $60 billion. SpaceX is making the deal just as it prepares to go public in what is likely to be one of the largest initial public offerings ever. In a social media post, SpaceX said the combination with Cursor, which makes code-writing software, would "allow us to build the world's most useful" A.I. models.

SpaceX added that the agreement gave it the option "to acquire Cursor later this year for $60 billion or pay $10 billion for our work together." It is unclear if the companies plan to consummate the deal before or after SpaceX's I.P.O., which could happen as early as June. [...] Cursor, which has raised more than $3 billion in funding, was founded in 2022 and made waves as a fast-growing A.I. start-up. It was under pressure in recent months after OpenAI and Anthropic announced competing code-writing products that were embraced by tech companies. Cursor had been in talks to raise funding in recent weeks.

AI

Job Cuts Driven By AI Are Rising On Wall Street 59

Firms like Bank of America, Citi, Wells Fargo, and others are reporting strong profits while reducing head count and automating more work. "All of them credited A.I. to some degree ... in areas ranging from the so-called back office, where tens of thousands of employees fill out paperwork to comply with various laws and regulations, to the front office, where seven-figure salaried professionals put together complicated financial transactions for corporate clients," reports the New York Times. From the report: Less than four months ago, Bank of America's chief executive, Brian T. Moynihan, volunteered in a TV interview what he would say to his 210,000 employees about the chance of artificial intelligence replacing human work. "You don't have to worry," he said. "It's not a threat to their jobs." Last week, after Bank of America reported $8.6 billion in profit for the first quarter -- $1.6 billion more than the same period a year earlier -- Mr. Moynihan struck a different tone. The bank's bottom line, he said, was helped by shedding 1,000 jobs through attrition by "eliminating work and applying technology," which he repeatedly specified was artificial intelligence. He predicted more of that in the months and years to come. "A.I. gives us places to go we haven't gone," Mr. Moynihan said.

The veneer of Wall Street's longstanding assertion -- that A.I. will enhance human work, not replace it -- is rapidly peeling away, as evidenced by the current quarterly earnings season. JPMorgan Chase, Citi, Bank of America, Goldman Sachs, Morgan Stanley and Wells Fargo racked up $47 billion in collective profits, up 18 percent, while shedding 15,000 employees. All of them credited A.I. to some degree with helping cut jobs and automate work in areas ranging from the so-called back office, where tens of thousands of employees fill out paperwork to comply with various laws and regulations, to the front office, where seven-figure salaried professionals put together complicated financial transactions for corporate clients.

Unlike executives in Silicon Valley, few major financial figures are stating outright that A.I. is eliminating jobs. Citi, for example, has pledged to shrink its work force by 20,000 people through what one executive described to financial analysts last week as the company's "productivity and efficiency journey." The bank is paying for A.I. software from Anthropic, Google, Microsoft and OpenAI, to automatically read legal documents, approve account openings, send invoices for trades and organize sensitive customer data, among other tasks, according to public statements by bank executives and two people familiar with Citi's systems. Among the recent job cuts at Citi were scores of employees who were part of the bank's "A.I. Champions and Accelerators" program, according to the two people, who were not permitted by the bank to speak publicly. The program involves Citi employees who perform their day jobs while also working to persuade their colleagues to adopt A.I. technologies.
Social Networks

Palantir Posts Bond Villain Manifesto On X (engadget.com) 140

DeanonymizedCoward writes: Engadget reports that Palantir has posted to X a summary of CEO Alex Karp and Nicholas W. Zamiska's 2025 book, The Technological Republic, which reads as if a utopian idealist doodled on a Bond villain's whiteboard. While the post makes some decent points, it also highlights the Big-AI attitude that the AI surveillance state is in fact a good thing, and strongly implies that the Good Guys need to do war crimes before the Bad Guys get around to it. "The ability of free and democratic societies to prevail requires something more than moral appeal," one of the 22 points states. "It requires hard power, and hard power in this century will be built on software."

The book is billed as "a passionate call for the West to wake up to our new reality," and other excerpts in the social media post include assertions such as: "Free email is not enough. The decadence of a culture or civilization, and indeed its ruling class, will be forgiven only if that culture is capable of delivering economic growth and security for the public"; "National service should be a universal duty"; "The postwar neutering of Germany and Japan must be undone"; and "Some cultures have produced vital advances; others remain dysfunctional and regressive."

The statement criticizes the West's resistance to "defining national cultures in the name of inclusivity," as well as the treatment of billionaires and the "ruthless exposure of the private lives of public figures."
United States

Nevada Police Can Now Track Cellphones Without a Warrant (apnews.com) 62

"Nevada quietly signed an agreement earlier this year with a company that collects location data from cellphones, allowing police to track a device virtually in real time," reports the Associated Press. "All without a warrant." The software from Fog Data Science, adopted this January in Nevada through a Department of Public Safety contract, pulls information from smartphone apps in order to let state investigators identify the location of mobile devices. The state is allowed more than 250 queries a month using the tool, which allows officers to track a device's location over long stretches of time and enables them to see what Fog calls "patterns of life," according to company documents from 2022. It can help them deduce where and when people work and live, with whom they associate and what places they visit, according to privacy experts... Traditionally, police must obtain a warrant from a judge to access cellphone location information — a process that can take days or weeks. And while cellphone users may be aware that they are sharing their location through apps such as Google Maps, critics say few are aware that such information can make its way to police...

Other agencies in Nevada have been known to use technology similar to Fog. In 2013, the Las Vegas Metropolitan Police Department acquired something known as a cell-site simulator that mimics cellphone towers and can sweep up signals from entire areas to track individuals, with some models capable of intercepting texts and calls. Police have not released detailed information about the technology since then.

"Police in other states have said the technology (and its low price tag) has helped expand investigatory capacity," the article adds.

But it also points out that Fog Data Science has a web page letting individuals opt out of all their data sets.
Data Storage

Remembering Zip Drives - the Trendy Storage Technology of the 1990s (xda-developers.com) 180

Back in the 1990s, floppy disks "had a mere capacity of 1.44MB," remembers XDA Developers, "which would soon become absolutely tiny for the increasingly large pieces of software that would come about." Floppy disks also felt quite fragile, and while we got "superfloppy" formats that were physically larger and had more capacity, those were pretty unwieldy as portable storage. Enter 1994, when a company called Iomega introduced its variant of a "superfloppy", the Zip drive... [T]he initial capacity introduced in 1994 reached a whopping 100MB, which was a huge number when put up against the traditional floppy disk. Zip drives also had major performance benefits, with read speeds that could average 1.4MB/s, as opposed to the comparatively sluggish 16kB/s speeds of a traditional floppy disk, as well as a seek time of around 28ms, whereas a floppy disk averaged 200ms. Zip drives weren't quite as fast as desktop HDDs, but for portable storage, this was a huge step forward...
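To put those throughput figures in perspective, here is a quick back-of-the-envelope calculation, using only the numbers quoted above, of how long reading a full 100MB cartridge would take at each speed:

```python
# Rough transfer-time comparison based on the speeds quoted above.
capacity_mb = 100             # original Zip disk capacity
zip_read_mb_s = 1.4           # average Zip drive read speed
floppy_read_mb_s = 16 / 1024  # ~16kB/s floppy read speed, expressed in MB/s

zip_seconds = capacity_mb / zip_read_mb_s        # ~71 seconds
floppy_seconds = capacity_mb / floppy_read_mb_s  # ~6400 seconds (~1.8 hours)

print(f"Zip drive: ~{zip_seconds:.0f} s to read 100MB")
print(f"At floppy speed: ~{floppy_seconds / 3600:.1f} hours for the same 100MB")
```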

[I]n 1998, Iomega introduced the Zip 250 disks, which increased the capacity to 250MB, and, already in the new millennium, we got the Zip 750, which took that further to 750MB... It was an appealing enough proposition that big computer manufacturers like Dell started including a Zip drive in some of their PCs. Even Apple included Zip drives in some of its Power Macintosh models from the mid-to-late 90s. However, things started to shift towards the end of the decade as other portable formats rose to prominence, most notably CDs and USB flash drives.

Despite their initial success, it didn't take long for users to start noticing a major drawback of Zip drives: many times, they would just fail. It wasn't necessarily related to age or any particular misuse of the disks; it just happened. It was a big enough phenomenon that it became known as the "click of death", and once it happened, your drive was gone. The problem was estimated by Iomega to affect around 0.5% of Zip drives, but while that sounds like a small number, when you sell products by the thousands, it becomes fairly widespread. It was a big enough issue that, in September 1998, a class action lawsuit was filed against Iomega for the common problems. Some of the complaints in that lawsuit were eventually dismissed by the court of Delaware, but others were not, and once the public became aware of the problems with Zip drives, it was hard for the brand to make a comeback.

It didn't help that this happened around the same time as formats such as CDs were becoming more popular... And eventually, USB flash drives became the most popular way to carry data around since they were smaller and offered much faster speeds... Eventually, after seeing its profits plummet by the mid-2000s, Iomega was sold to a company called EMC in 2008, and in 2013, EMC and Lenovo formed a joint venture that took over Iomega's business and removed all of the Iomega branding from its products.

The article does note that "as late as 2014, some aviation companies were still using Zip drives to distribute updates for navigation databases." Are there any Slashdot readers who still remember their own Zip drive experiences?

Share your memories in the comments of that once-so-trendy storage technology from the 1990s...
Open Source

FSF to OnlyOffice: You Can't Use the GNU (A)GPL to Take Software Freedom Away (fsf.org) 51

Nextcloud joined a project to create a sovereign replacement for Microsoft Office called "Euro-Office". But after that project forked OnlyOffice, OnlyOffice suspended its partnership with Nextcloud. "They removed all references to our brand/attribute as required by our license," argued OnlyOffice CEO Lev Bannov on March 30th. ("The core issue here isn't just about what the AGPL license states, but about the additional provisions we, as the authors, have included... If the Euro-Office team believes our approach conflicts with the AGPLv3 license, we invite them to submit an official request to FSF for review.")

But this week the FSF responded (as "the steward of the GNU family of General Public Licenses"), criticizing OnlyOffice's "attempt to impose an additional restriction on the AGPLv3" and calling it "inconsistent with the freedoms granted by the license," in a blog post from FSF licensing/compliance manager Krzysztof Siewicz: It is possible to modify the (A)GPLv3 with additional terms, but only by adhering to the terms of the license... The (A)GPLv3 makes it clear that it permits all licensees to remove any additional terms that are "further restrictions" under the (A)GPLv3. It states, "[i]f the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term"...

We urge OnlyOffice to clarify the situation by making it unambiguous that OnlyOffice is licensed under the AGPLv3, and that users who already received copies of the software are allowed to remove any further restrictions. Additionally, if they intend to continue to use the AGPLv3 for future releases, they should state clearly that the program is licensed under the AGPLv3 and make sure they remove any further restrictions from their program documentation and source code. Confusing users by attaching further restrictions to any of the FSF's family of GNU General Public Licenses is not in line with free software.

"If FSF determines that our license and project align with AGPLv3, we will continue as an open-source initiative," OnlyOffice's CEO had written in March. "However, if the decision goes against us, we are ready to consider other options."
AI

US Government Now Wants Anthropic's 'Mythos', Preparing for AI Cybersecurity Threats (politico.com) 24

Friday Anthropic's CEO met with top U.S. officials and "discussed opportunities for collaboration," according to a White House spokesperson cited by Politico, "as well as shared approaches and protocols to address the challenges associated with scaling this technology."

CNN notes the meeting happens at the same time Anthropic "battles the Trump administration in court for blacklisting its Claude AI model..." The meeting took place as the US government is trying to balance its hardline approach to Anthropic with the national security implications of turning its back on the company's breakthrough technology — including its Mythos tool that can identify cybersecurity threats but also present a roadmap for hackers to attack companies or the government... The Office of Management and Budget has already told agencies it is preparing to give them access to Mythos to prepare, Bloomberg reported. Axios reported the White House is also in discussion to gain access to Mythos.
The Trump administration "recognizes the power" of Mythos, reports Axios, "and its highly sophisticated — and potentially dangerous — ability to breach cybersecurity defenses." "It would be grossly irresponsible for the U.S. government to deprive itself of the technological leaps that the new model presents," a source close to negotiations told us. "It would be a gift to China"... Some parts of the U.S. intelligence community, plus the Cybersecurity and Infrastructure Security Agency (CISA, part of Homeland Security), are testing Mythos. Treasury and others want it.
The White House added they plan to invite other AI companies for similar discussions, Politico reports. But Mythos "is also alarming regulators in Europe, who have told POLITICO they have not been able to gain access..." U.S. government agency tech leaders sought access to the model after Anthropic earlier this year began testing the model and granted limited access to a select group of companies, including JPMorgan, Amazon and Apple... after finding it had hacking capabilities far outstripping those of previous AI models. This includes the ability to autonomously identify and exploit complex software vulnerabilities, such as so-called zero-day flaws, which even some of the sharpest human minds are unable to patch. The AI startup also wrote that the model could carry out end-to-end cyberattacks autonomously, including by navigating enterprise IT systems and chaining together exploits. It could also act as a force-multiplier for research needed to build chemical and biological weapons, and in certain instances, made efforts to cover its tracks when attacking systems, according to Anthropic's report on the model's capabilities and its safety assessments.

Those findings and others have inspired fears that the model could be co-opted to launch powerful cyberattacks with relative ease if it fell into the wrong hands. Logan Graham, a senior security researcher at Anthropic, previously told POLITICO that researchers and tech firms had been given early access to Mythos so they could find flaws in their critical code before state-backed hackers or cybercriminals could exploit them. "Within six, 12 or 24 months, these kinds of capabilities could be just broadly available to everybody in the world," Graham said.

Privacy

Shuttered Startups Are Selling Old Slack Chats, Emails To AI Companies 41

Some failed startups are reportedly selling old Slack messages, emails, and other internal records to AI companies as training data, creating a new way to cash out after shutting down. Fast Company reports: Shanna Johnson, the CEO of now-defunct software company Cielo24, told the publication that she was able to sell every Slack message, internal email, and Jira ticket as training data for "hundreds of thousands of dollars."

This isn't a one-off scenario. SimpleClosure, a startup that helps companies like Cielo24 shut down, told Forbes that there's been major interest from AI companies trying to get their hands on workplace data. Because of this, SimpleClosure launched a new tool that allows companies to sell their wealth of internal communications -- from Slack archives to email chains -- to AI labs. The company said it's processed 100 such deals in the past year. Payouts ranged from $10,000 to $100,000.
"I think the privacy issues here are quite substantial," Marc Rotenberg, founder of the Center for AI and Digital Policy, told Forbes. "Employee privacy remains a key concern, particularly because people have become so dependent on these new internal messaging tools like Slack. ... It's not generic data. It's identifiable people."
Security

NIST Limits CVE Enrichment After 263% Surge In Vulnerability Submissions (thehackernews.com) 18

NIST is narrowing how it handles CVEs in the National Vulnerability Database (NVD), saying it will only automatically enrich higher-priority vulnerabilities. "CVEs that do not meet those criteria will still be listed in the NVD but will not automatically be enriched by NIST," it said. "This change is driven by a surge in CVE submissions, which increased 263% between 2020 and 2025. We don't expect this trend to let up anytime soon." The Hacker News reports: The prioritization criteria outlined by NIST, which went into effect on April 15, 2026, are as follows:
- CVEs appearing in the U.S. Cybersecurity and Infrastructure Security Agency's (CISA) Known Exploited Vulnerabilities (KEV) catalog.
- CVEs for software used within the federal government.
- CVEs for critical software as defined by Executive Order 14028: this includes software that's designed to run with elevated privilege or managed privileges, has privileged access to networking or computing resources, controls access to data or operational technology, and operates outside of normal trust boundaries with elevated access.

Any CVE submission that doesn't meet these thresholds will be marked as "Not Scheduled." The idea, NIST said, is to focus on CVEs that have the maximum potential for widespread impact. "While CVEs that do not meet these criteria may have a significant impact on affected systems, they generally do not present the same level of systemic risk as those in the prioritized categories," it added. [...]
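NIST has not published reference code for this triage, but the decision described above reduces to a simple check: a CVE is queued for automatic enrichment only if it meets at least one of the three criteria. A minimal sketch, with all field names hypothetical rather than taken from the actual NVD data model:

```python
from dataclasses import dataclass

@dataclass
class CveSubmission:
    # Hypothetical flags; the real NVD record format is richer than this.
    cve_id: str
    in_cisa_kev: bool       # listed in CISA's Known Exploited Vulnerabilities catalog
    federal_software: bool  # affects software used within the federal government
    eo14028_critical: bool  # "critical software" as defined by Executive Order 14028

def enrichment_status(cve: CveSubmission) -> str:
    """Queue a CVE for automatic enrichment only if it meets one of the
    prioritization criteria; everything else is marked 'Not Scheduled'."""
    if cve.in_cisa_kev or cve.federal_software or cve.eo14028_critical:
        return "prioritized"
    return "Not Scheduled"  # still listed in the NVD, just not auto-enriched

print(enrichment_status(CveSubmission("CVE-0000-0001", in_cisa_kev=True,
                                      federal_software=False, eo14028_critical=False)))
print(enrichment_status(CveSubmission("CVE-0000-0002", in_cisa_kev=False,
                                      federal_software=False, eo14028_critical=False)))
```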

Changes have also been instituted for various other aspects of the NVD operations. These include:
- NIST will no longer routinely provide a separate severity score for a CVE where the CVE Numbering Authority has already provided a severity score.
- A modified CVE will be reanalyzed only if it "materially impacts" the enrichment data. Users can request specific CVEs to be reanalyzed by sending an email to the same address listed above.
- All unenriched CVEs currently in backlog with an NVD publish date earlier than March 1, 2026, will be moved into the "Not Scheduled" category. This does not apply to CVEs that are already in the KEV catalog.
- NIST has updated the CVE status labels and descriptions, as well as the NVD Dashboard, to accurately reflect the status of all CVEs and other statistics in real time.

Privacy

Gazing Into Sam Altman's Orb Could Solve Ticket Scalping (wired.com) 57

An anonymous reader quotes a report from Wired: Sam Altman's iris-scanning, humanity-verifying World project announced at an event in San Francisco on Friday that Tinder users around the globe can now put a digital badge on their profiles signaling to potential suitors that they're a real human, provided they've already stared into one of World's glossy white Orbs and allowed their eyes to be scanned. The announcement follows a pilot project for Tinder verification that World previously conducted in Japan.

[...] In addition to the Tinder global expansion, Tools for Humanity, the company behind World, announced a number of other consumer and enterprise partnerships on Friday at its Lift Off event in San Francisco. The startup says Tinder users who verify with their World ID will receive five free "boosts," typically a paid feature that increases the number of users who see a profile by up to 10 times for 30 minutes. The videoconferencing platform Zoom also says that users can now require other participants to verify their identity with World before joining a call. Docusign, the contract signing software, will allow users to require World's identity verification technology.

Tiago Sada, Tools for Humanity's chief product officer, tells WIRED the company sees major platform partnerships as key to helping World become a mainstream identity-verification technology. Sada said he's especially interested in working with social media companies in the future, and was encouraged to see that Reddit has started testing World as a solution to help users distinguish bots from real people. [...] World is also launching a tool called Concert Kit, which lets artists reserve concert tickets for verified humans, a pitch aimed squarely at the bot-driven scalping problem that critics say has plagued sites like TicketMaster. World will test the feature on the upcoming Bruno Mars World Tour featuring Anderson .Paak, who is scheduled to play a verified-humans-only show under his alias DJ Pee .Wee in San Francisco on Friday night.
"The idea that World ID is not just private, but it's one of the most private things you've ever used, that's not obvious," says Sada. "We're just not used to this kind of technology. Many people used to tape their [iPhone's sensor used to enable] Face ID when it came out, then we got used to it."
AI

Anthropic Rolls Out Claude Opus 4.7, an AI Model That Is Less Risky Than Mythos 40

Anthropic released Claude Opus 4.7, calling it its strongest generally available model and an improvement over Opus 4.6 in areas like software engineering, instruction-following, tool use, and agentic coding. But the company says it is "less broadly capable" than the restricted Claude Mythos Preview, "which Anthropic rolled out to a select group of companies as part of a new cybersecurity initiative called Project Glasswing earlier this month," reports CNBC. From the report: The launch of Claude Opus 4.7 on Thursday comes after Anthropic launched Claude Opus 4.6 in February. Anthropic said the new model outperforms Claude Opus 4.6 across many use cases, including industry benchmarks for agentic coding, multidisciplinary reasoning, scaled tool use and agentic computer use, according to a release. Anthropic said it experimented with efforts to "differentially reduce" Claude Opus 4.7's cyber capabilities during training.

The company encouraged security professionals who are interested in using the model for "legitimate cybersecurity purposes" to apply through a formal verification program. Claude Opus 4.7 is available across all of Anthropic's Claude products, its application programming interface and through cloud providers Microsoft, Google and Amazon. The new model is the same price as Claude Opus 4.6, Anthropic said.
Technology

Researchers Induce Smells With Ultrasound, No Chemical Cartridges Required (uploadvr.com) 51

An anonymous reader quotes a report from UploadVR: A group of independent researchers built a device that can artificially induce smell using ultrasound, with no consumable cartridges required. [...] The team of four are Lev Chizhov, Albert Yan-Huang, Thomas Ribeiro, and Aayush Gupta. Chizhov is a neurotech entrepreneur with a background in math and physics, Yan-Huang is a researcher at Caltech with a background in computation and neural systems, and Ribeiro and Gupta are co-researchers on the project with software engineering and AI expertise.

Instead of targeting your nose at all, the device directly targets the olfactory bulb in your brain with "focused ultrasound through the skull." The researchers say that as far as they're aware, no one has ever done this before, even in animals. A challenge in targeting the olfactory bulb is that it's buried behind the top of your nose, and your nose doesn't provide a flat surface for an emitter. Ultrasound also doesn't travel well through air. The solution the researchers came up with was to place the emitter on your forehead instead, with a "solid, jello-like pad for stability and general comfort," and the ultrasound directed downward towards the olfactory bulb.

To determine the best placement, they say they used an MRI of one of their skulls to "roughly determine where the transducer would point and how the focal region (where ultrasound waves actually concentrate) aligned with the olfactory bulb (the target for stimulation)". [...] According to the researchers, they were able to induce the sensation of fresh air "with a lot of oxygen", the smell of garbage "like few-day-old fruit peels," an ozone-like sensation "like you're next to an air ionizer," and a campfire smell of burning wood. While technically head-mounted, the current device does require being held up with two hands. But as with all such prototypes, it likely could be significantly miniaturized.

AI

Cal.com Is Going Closed Source Because of AI 93

Cal is moving its flagship scheduling software from open source to a proprietary license, arguing that AI coding tools now make it much easier for attackers to scan public codebases for vulnerabilities. "Open source security always relied on people to find and fix any problems," said Peer Richelsen, co-founder of Cal. "Now AI attackers are flaunting that transparency." CEO Bailey Pumfleet added: "Open-source code is basically like handing out the blueprint to a bank vault. And now there are 100x more hackers studying the blueprint." The company says it still supports open source and is releasing a separate Cal.diy version for hobbyists, but doesn't want to risk customer booking data in its commercial product. ZDNet reports: When Cal was founded in 2022, Bailey Pumfleet, the CEO and co-founder, wrote, "Cal.com would be an open-source project [because] limitations of existing scheduling products could only be solved by open source." Since Cal was successful and now claims to be the largest Next.js project, he was on to something. Today, however, Pumfleet tells me that AI programs such as "Claude Opus can scour the code to find vulnerabilities," so the company is moving the project from the GNU Affero General Public License (AGPL) to a proprietary license to defend the program's security.

[...] Cal also quoted Huzaifa Ahmad, CEO of Hex Security, "Open-source applications are 5-10x easier to exploit than closed-source ones. The result, where Cal sits, is a fundamental shift in the software economy. Companies with open code will be forced to risk customer data or close public access to their code." "We are committed to protecting sensitive data," Pumfleet said. "We want to be a scheduling company, not a cybersecurity company." He added, "Cal.com handles sensitive booking data for our users. We won't risk that for our love of open source."

While its commercial program is no longer open source, Cal has released Cal.diy. This is a fully open-source version of its platform for hobbyists. The open project will enable experimentation outside the closed application that handles high-stakes data. Pumfleet concluded, "This decision is entirely around the vulnerability that open source introduces. We still firmly love open source, and if the situation were to change, we'd open source again. It's just that right now, we can't risk the customer data."
Printer

California Ghost-Gun Bill Wants 3D Printers To Play Cop, EFF Says (theregister.com) 139

A proposed California bill would require 3D printer makers to use state-certified software to detect and block files for gun parts, but advocates at the Electronic Frontier Foundation (EFF) say it would be easy to evade and could lead to widespread surveillance of users' printing activity. The Register reports: The bill in question is AB 2047, the scope of which, on paper, appears strict. The primary goal is clear and simple: to require 3D printer manufacturers to use a state-certified algorithm that checks digital design files for firearm components and blocks print jobs that would produce prohibited parts. [...] Cliff Braun and Rory Mir, who respectively work in policy and tech community engagement at the EFF, claim that the proposals in California are technically infeasible and in practice will lead to consumer surveillance.

In a series of blog posts published this month, the pair argued that print-blocking technology -- proposals for which have also surfaced in states including New York and Washington -- cannot work for a range of technical reasons. They argued that because 3D printers and other types of computer numerical control (CNC) machines are fairly simple, with much of their brains coming from the computer-aided manufacturing (CAM) software -- or slicer software -- to which they are linked, the bill would in effect create categories of legal and illegal software. Proprietary software will likely become the de facto option, leaving open source alternatives to rot.

"Under these proposed laws, manufacturers of consumer 3D printers must ensure their printers only work with their software, and implement firearm detection algorithms on either the printer itself or in a slicer software," wrote Braun earlier this month. "These algorithms must detect firearm files using a maintained database of existing models. Vendors of printers must then verify that printers are on the allow-list maintained by the state before they can offer them for sale. Owners of printers will be guilty of a crime if they circumvent these intrusive scanning procedures or load alternative software, which they might do because their printer manufacturer ends support."

Braun also argued that it would be trivial for anyone who uses 3D printers to make small tweaks to either the visual models of firearms parts, or the machine instructions (G-code) generated from those models, to evade detection. Mir further argued that the bill offers no guardrails to keep this "constantly expanding blacklist" limited to firearm-related designs. In his view, there is a clear risk that this approach will creep into other forms of alleged unlawful activity, such as copyright infringement. [...] Braun and Mir have a list of other arguments against the bill. They say the algorithms are more than likely to lead to false positives, which will prevent good-faith users from using their hardware. Many 3D printer owners also have no interest in printing firearm components. Most simply want the freedom to print trinkets and spare parts while others use them to print various items and sell them as an income stream.
