AI

Google's NotebookLM AI Can Now 'Discover Sources' For You

Google's NotebookLM has added a new "Discover sources" feature that allows users to describe a topic and have the AI find and curate relevant sources from the web -- eliminating the need to upload documents manually. "When you tap the Discover button in NotebookLM, you can describe the topic you're interested in, and NotebookLM will bring back a curated collection of relevant sources from the web," says Google software engineer Adam Bignell. Click to add those sources to your notebook; "it's a fast and easy way to quickly grasp a new concept or gather essential reading on a topic." PCMag reports: You can still add your files. NotebookLM can ingest PDFs, websites, YouTube videos, audio files, Google Docs, or Google Slides and summarize, transcribe, narrate, or convert into FAQs and study guides. "Discover sources" helps incorporate information you may not have saved. [...] The imported sources stay within the notebook you created. You can read the entire original document, ask questions about it via chat, or apply other NotebookLM features to it.

Google started rolling out both features on Wednesday. They should be available to all users within "a week or so." For those concerned about privacy, Google says, "NotebookLM does not use your personal data, including your source uploads, queries, and the responses from the model for training."
There's also an "I'm Feeling Curious" button (a reference to its iconic "I'm feeling lucky" search button) that generates sources on a random topic you might find interesting.
Piracy

Massive Expansion of Italy's Piracy Shield Underway (techdirt.com)

An anonymous reader quotes a report from Techdirt: Walled Culture has been closely following Italy's poorly designed Piracy Shield system. Back in December we reported how copyright companies used their access to the Piracy Shield system to order Italian Internet service providers (ISPs) to block access to all of Google Drive for the entire country, and how malicious actors could similarly use that unchecked power to shut down critical national infrastructure. Since then, the Computer & Communications Industry Association (CCIA), an international, not-for-profit association representing computer, communications, and Internet industry firms, has added its voice to the chorus of disapproval. In a letter (PDF) to the European Commission, it warned about the dangers of the Piracy Shield system to the EU economy [...]. It also raised an important new issue: the fact that Italy brought in this extreme legislation without notifying the European Commission under the so-called "TRIS" procedure, which allows others to comment on possible problems [...].

Beyond Italy's failure to notify the Commission about its new legislation in advance, the CCIA believes this anti-piracy mechanism is in breach of several other EU laws. That includes the Open Internet Regulation, which prohibits ISPs from blocking or slowing internet traffic unless required by a legal order. The blocking carried out under Piracy Shield also contradicts the Digital Services Act (DSA) in several respects, notably Article 9, which requires certain elements to be included in orders to act against illegal content. More broadly, the Piracy Shield is aligned with neither the Charter of Fundamental Rights nor the Treaty on the Functioning of the EU, as it hinders freedom of expression, the freedom to provide internet services, the principle of proportionality, and the right to an effective remedy and a fair trial.

Far from taking these criticisms to heart, or acknowledging that Piracy Shield has failed to convert people to paying subscribers, the Italian government has decided to double down, and to make Piracy Shield even worse. Massimiliano Capitanio, Commissioner at AGCOM, the Italian Authority for Communications Guarantees, explained on LinkedIn how Piracy Shield was being extended in far-reaching ways (translation by Google Translate, original in Italian). [...] That is, Piracy Shield will apply to live content far beyond sports events, its original justification, and to streaming services. Even DNS and VPN providers will be required to block sites, a serious technical interference in the way the Internet operates, and a threat to people's privacy. Search engines, too, will be forced to de-index material. The only minor concession to ISPs is to unblock domain names and IP addresses that are no longer allegedly being used to disseminate unauthorized material. There are, of course, no concessions to ordinary Internet users affected by Piracy Shield blunders.
In the future, Italy's Piracy Shield will add:
- 30-minute blackout orders not only for pirate sports events, but also for other live content;
- the extension of blackout orders to VPNs and public DNS providers;
- the obligation for search engines to de-index pirate sites;
- the procedures for unblocking domain names and IP addresses obscured by Piracy Shield that are no longer used to spread pirate content;
- a new procedure to combat piracy of linear and "on demand" television, for example to protect films and TV series.
AI

DeepMind Details All the Ways AGI Could Wreck the World (arstechnica.com)

An anonymous reader quotes a report from Ars Technica, written by Ryan Whitwam: Researchers at DeepMind have ... released a new technical paper (PDF) that explains how to develop AGI safely, which you can download at your convenience. It contains a huge amount of detail, clocking in at 108 pages before references. While some in the AI field believe AGI is a pipe dream, the authors of the DeepMind paper project that it could happen by 2030. With that in mind, they aimed to understand the risks of a human-like synthetic intelligence, which they acknowledge could lead to "severe harm." This work has identified four possible types of AGI risk, along with suggestions on how we might ameliorate said risks. The DeepMind team, led by company co-founder Shane Legg, categorized the negative AGI outcomes as misuse, misalignment, mistakes, and structural risks.

The first possible issue, misuse, is fundamentally similar to current AI risks. However, because AGI will be more powerful by definition, the damage it could do is much greater. A ne'er-do-well with access to AGI could misuse the system to do harm, for example, by asking the system to identify and exploit zero-day vulnerabilities or create a designer virus that could be used as a bioweapon. DeepMind says companies developing AGI will have to conduct extensive testing and create robust post-training safety protocols. Essentially, AI guardrails on steroids. They also suggest devising a method to suppress dangerous capabilities entirely, sometimes called "unlearning," but it's unclear if this is possible without substantially limiting models. Misalignment is largely not something we have to worry about with generative AI as it currently exists. This type of AGI harm is envisioned as a rogue machine that has shaken off the limits imposed by its designers. Terminators, anyone? More specifically, the AI takes actions it knows the developer did not intend. DeepMind says its standard for misalignment here is more advanced than simple deception or scheming as seen in the current literature.

To avoid that, DeepMind suggests developers use techniques like amplified oversight, in which two copies of an AI check each other's output, to create robust systems that aren't likely to go rogue. If that fails, DeepMind suggests intensive stress testing and monitoring to watch for any hint that an AI might be turning against us. Keeping AGIs in virtual sandboxes with strict security and direct human oversight could help mitigate issues arising from misalignment. Basically, make sure there's an "off" switch. If, on the other hand, an AI didn't know that its output would be harmful and the human operator didn't intend for it to be, that's a mistake. We get plenty of those with current AI systems -- remember when Google said to put glue on pizza? The "glue" for AGI could be much stickier, though. DeepMind notes that militaries may deploy AGI due to "competitive pressure," but such systems could make serious mistakes as they will be tasked with much more elaborate functions than today's AI. The paper doesn't have a great solution for mitigating mistakes. It boils down to not letting AGI get too powerful in the first place. DeepMind calls for deploying slowly and limiting AGI authority. The study also suggests passing AGI commands through a "shield" system that ensures they are safe before implementation.
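
To make the amplified-oversight idea concrete, here is a minimal Python sketch in which one model proposes an answer and a second copy critiques it before the answer is accepted. The model_call stub, prompt wording, and acceptance rule are hypothetical illustrations of the concept, not DeepMind's actual protocol.

```python
# Minimal sketch of "amplified oversight": one model proposes an answer,
# a second copy critiques it, and the answer is only accepted once the
# critic raises no objection.

def model_call(prompt: str) -> str:
    # Hypothetical stub; replace with a real LLM API call.
    return "OK"

def amplified_oversight(task: str, max_rounds: int = 3) -> str | None:
    answer = model_call(f"Solve the following task:\n{task}")
    for _ in range(max_rounds):
        critique = model_call(
            f"Task: {task}\nProposed answer: {answer}\n"
            "List any safety problems with this answer, or reply OK."
        )
        if critique.strip() == "OK":
            return answer  # the critic found no objection
        # Revise the answer in light of the critique, then re-check it.
        answer = model_call(
            f"Task: {task}\nPrevious answer: {answer}\n"
            f"Critique: {critique}\nProduce a revised answer."
        )
    return None  # no consensus reached; escalate to a human overseer
```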

Lastly, there are structural risks, which DeepMind defines as the unintended but real consequences of multi-agent systems contributing to our already complex human existence. For example, AGI could create false information that is so believable that we no longer know who or what to trust. The paper also raises the possibility that AGI could accumulate more and more control over economic and political systems, perhaps by devising heavy-handed tariff schemes. Then one day, we look up and realize the machines are in charge instead of us. This category of risk is also the hardest to guard against because it would depend on how people, infrastructure, and institutions operate in the future.

Media

AV1 is Supposed To Make Streaming Better, So Why Isn't Everyone Using It? (theverge.com)

Despite promises of more efficient streaming, the AV1 video codec hasn't achieved widespread adoption seven years after its 2018 debut, even with backing from tech giants Netflix, Microsoft, Google, Amazon, and Meta. The Alliance for Open Media (AOMedia) claims AV1 is 30% more efficient than standards like HEVC, delivering higher-quality video at lower bandwidth while remaining royalty-free.

Major services including YouTube, Netflix, and Amazon Prime Video have embraced the technology, with Netflix encoding approximately 95% of its content using AV1. However, adoption faces significant hurdles. Many streaming platforms including Max, Peacock, and Paramount Plus haven't implemented AV1, partly due to hardware limitations. Devices require specific decoders to properly support AV1, though recent products from Apple, Nvidia, AMD, and Intel have begun including them. "In order to get its best features, you have to accept a much higher encoding complexity," Larry Pearlstein, associate professor at the College of New Jersey, told The Verge. "But there is also higher decoding complexity, and that is on the consumer end."
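
To get a feel for the encoding-complexity tradeoff Pearlstein describes, here is a small Python sketch that shells out to ffmpeg's libaom-av1 encoder (assuming an ffmpeg build with libaom support); the flag values are illustrative rather than a recommended configuration.

```python
# Illustration of AV1's encoding-complexity knob using ffmpeg/libaom-av1.
import subprocess

def encode_av1(src: str, dst: str, cpu_used: int = 4) -> None:
    # Lower -cpu-used values spend far more CPU time in exchange for
    # better compression; higher values encode faster at some quality cost.
    # "-crf 30 -b:v 0" selects libaom's constant-quality mode.
    subprocess.run(
        ["ffmpeg", "-i", src,
         "-c:v", "libaom-av1", "-crf", "30", "-b:v", "0",
         "-cpu-used", str(cpu_used),
         dst],
        check=True,
    )

encode_av1("input.mp4", "output.webm")
```
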
China

Five VPN Apps In the App Store Had Links To Chinese Military (9to5mac.com)

A joint investigation found that at least five popular VPN apps on the App Store and Google Play have ties to Qihoo 360, a Chinese company with military links. Apple has since removed two of the apps but has not confirmed the status of the remaining three, which 9to5Mac notes have "racked up more than a million downloads." The five apps in question are Turbo VPN, VPN Proxy Master, Thunder VPN, Snap VPN, and Signal Secure VPN (not associated with the Signal messaging app). The Financial Times reports: At least five free virtual private networks (VPNs) available through the US tech groups' app stores have links to Shanghai-listed Qihoo 360, according to a new report by research group Tech Transparency Project, as well as additional findings by the Financial Times. Qihoo, formally known as 360 Security Technology, was sanctioned by the US in 2020 for alleged Chinese military links. The US Department of Defense later added Qihoo to a list of Chinese military-affiliated companies [...] In recent recruitment listings, Guangzhou Lianchuang says its apps operate in more than 220 countries and that it has 10mn daily users. It is currently hiring for a position whose responsibilities include "monitoring and analyzing platform data." The right candidate will be "well-versed in American culture," the posting says.
AI

AI Masters Minecraft: DeepMind Program Finds Diamonds Without Being Taught

An AI system has for the first time figured out how to collect diamonds in the hugely popular video game Minecraft -- a difficult task requiring multiple steps -- without being shown how to play. Its creators say the system, called Dreamer, is a step towards machines that can generalize knowledge learned in one domain to new situations, a major goal of AI. From a report: "Dreamer marks a significant step towards general AI systems," says Danijar Hafner, a computer scientist at Google DeepMind in San Francisco, California. "It allows AI to understand its physical environment and also to self-improve over time, without a human having to tell it exactly what to do." Hafner and his colleagues describe Dreamer in a study in Nature published on 2 April.

In Minecraft, players explore a virtual 3D world containing a variety of terrains, including forests, mountains, deserts and swamps. Players use the world's resources to create objects, such as chests, fences and swords -- and collect items, among the most prized of which are diamonds. Importantly, says Hafner, no two experiences are the same. "Every time you play Minecraft, it's a new, randomly generated world," he says. This makes it useful for challenging an AI system that researchers want to be able to generalize from one situation to the next. "You have to really understand what's in front of you; you can't just memorize a specific strategy," he says.
AI

DeepMind is Holding Back Release of AI Research To Give Google an Edge (arstechnica.com)

Google's AI arm DeepMind has been holding back the release of its world-renowned research, as it seeks to retain a competitive edge in the race to dominate the burgeoning AI industry. From a report: The group, led by Nobel Prize-winner Sir Demis Hassabis, has introduced a tougher vetting process and more bureaucracy that made it harder to publish studies about its work on AI, according to seven current and former research scientists at Google DeepMind. Three former researchers said the group was most reluctant to share papers that reveal innovations that could be exploited by competitors, or cast Google's own Gemini AI model in a negative light compared with others.

The changes represent a significant shift for DeepMind, which has long prided itself on its reputation for releasing groundbreaking papers and as a home for the best scientists building AI. Meanwhile, huge breakthroughs by Google researchers -- such as the 2017 "transformers" paper that provided the architecture behind large language models -- played a central role in creating today's boom in generative AI. Since then, DeepMind has become a central part of its parent company's drive to cash in on the cutting-edge technology, as investors expressed concern that the Big Tech group had ceded its early lead to the likes of ChatGPT maker OpenAI.

"I cannot imagine us putting out the transformer papers for general use now," said one current researcher. Among the changes in the company's publication policies is a six-month embargo before "strategic" papers related to generative AI are released. Researchers also often need to convince several staff members of the merits of publication, said two people with knowledge of the matter.

Encryption

Gmail is Making It Easier For Businesses To Send Encrypted Emails To Anyone (theverge.com)

Google is rolling out a new encryption model for Gmail that allows enterprise users to send encrypted messages without requiring recipients to use custom software or exchange encryption certificates. The feature, launching in beta today, initially supports encrypted emails within the same organization, with plans to expand to all Gmail inboxes "in the coming weeks" and third-party email providers "later this year."

Unlike Gmail's current S/MIME-based encryption, the new system lets users simply toggle "additional encryption" in the email draft window. Non-Gmail recipients will receive a link to access messages through a guest Google Workspace account, while Gmail users will see automatically decrypted emails in their inbox.
AI

Anthropic Announces Updates On Security Safeguards For Its AI Models (cnbc.com)

Anthropic announced updates to the "responsible scaling" policy for its AI, including defining which model capabilities are powerful enough to require additional security safeguards. In an earlier version of its responsible scaling policy, Anthropic said it would start sweeping physical offices for hidden devices as part of a ramped-up security effort as the AI race intensifies. From a report: The company, backed by Amazon and Google, published safety and security updates in a blog post on Monday, and said it also plans to establish an executive risk council and build an in-house security team. Anthropic closed its latest funding round earlier this month at a $61.5 billion valuation, making it one of the highest-valued AI startups.

In addition to high-growth startups, tech giants including Google, Amazon and Microsoft are racing to announce new products and features. Competition is also coming from China, a risk that became more evident earlier this year when DeepSeek's AI model went viral in the U.S. Anthropic said in the post that it will introduce "physical" safety processes, such as technical surveillance countermeasures -- or the process of finding and identifying surveillance devices that are used to spy on organizations. The sweeps will be conducted "using advanced detection equipment and techniques" and will look for "intruders."
CNBC corrected that story to note that it had written about previous security safeguards Anthropic shared in October 2024. On Monday, Anthropic defined model capabilities that would require additional deployment and security safeguards beyond AI Safety Level (ASL) 3.
The Courts

Google To Pay $100 Million To Settle 14-Year-Old Advertising Lawsuit (msn.com)

An anonymous reader quotes a report from Reuters: Google has agreed to pay $100 million in cash to settle a long-running lawsuit claiming it overcharged advertisers by failing to provide promised discounts and charged for clicks on ads outside the geographic areas the advertisers targeted. A preliminary settlement of the 14-year-old class action, which began in March 2011, was filed late Thursday in the San Jose, California, federal court, and requires a judge's approval.

Advertisers who participated in Google's AdWords program, now known as Google Ads, accused the search engine operator of breaching its contract by manipulating its Smart Pricing formula to artificially reduce discounts. The advertisers also said Google, a unit of Mountain View, California-based Alphabet, misled them by failing to limit ad distribution to locations they designated, violating California's unfair competition law. Thursday's settlement covers advertisers who used AdWords between January 1, 2004, and December 13, 2012.

Google denied wrongdoing in agreeing to settle. "This case was about ad product features we changed over a decade ago and we're pleased it's resolved," spokesman Jose Castaneda said in an emailed statement. Lawyers for the plaintiffs may seek fees of up to 33% of the settlement fund, plus $4.2 million for expenses. According to court papers, the case took a long time as the parties produced extensive evidence, including more than 910,000 pages of documents and multiple terabytes of click data from Google, and participated in six mediation sessions before four different mediators.

Chrome

Honey Lost 4 Million Chrome Users After Shady Tactics Were Revealed (9to5google.com)

The Chrome extension Honey has lost over 4 million users after a viral video exposed it for hijacking affiliate codes and misleading users about finding the best coupon deals. 9to5Google reports: As we reported in early January, Honey had lost around 3 million users immediately after the video went viral, but ended up gaining back around 1 million later on. Now, as of March 2025, Honey is down to 16 million users on Chrome, down from its peak of 20 million.

This drop comes after a new Chrome policy took effect that prevents Honey, and extensions like it, from engaging in practices such as taking over affiliate codes without disclosure or without benefit to the extension's users. Honey has since updated its extension listing with disclosure, and we found that the behavior shown in the December video no longer occurs.

Programming

'No Longer Think You Should Learn To Code,' Says CEO of AI Coding Startup (x.com)

Learning to code has sort of become pointless as AI increasingly dominates programming tasks, said Replit founder and chief executive Amjad Masad. "I no longer think you should learn to code," Masad wrote on X.

The statement comes as major tech executives report significant AI inroads into software development. Google CEO Sundar Pichai recently revealed that 25% of new code at the tech giant is AI-generated, though still reviewed by engineers. Furthermore, Anthropic CEO Dario Amodei predicted AI could generate up to 90% of all code within six months.

Masad called this shift a "bittersweet realization" after spending years popularizing coding through open-source work, Codecademy, and Replit -- a platform that now uses AI to help users build apps and websites. Instead of syntax-focused programming skills, Masad recommends learning "how to think, how to break down problems... how to communicate clearly, with humans and with machines."
Encryption

HTTPS Certificate Industry Adopts New Security Requirements (googleblog.com)

The Certification Authority/Browser Forum "is a cross-industry group that works together to develop minimum requirements for TLS certificates," writes Google's Security blog. And earlier this month two proposals from Google's forward-looking roadmap "became required practices in the CA/Browser Forum Baseline Requirements," improving the security and agility of TLS connections...

Multi-Perspective Issuance Corroboration
Before issuing a certificate to a website, a Certification Authority (CA) must verify the requestor legitimately controls the domain whose name will be represented in the certificate. This process is referred to as "domain control validation" and there are several well-defined methods that can be used. For example, a CA can specify a random value to be placed on a website, and then perform a check to verify the value's presence has been published by the certificate requestor.
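
As a sketch of how the file-based variant of this check might look, the snippet below has the CA mint a random token and fetch it back from the conventional /.well-known/pki-validation/ path; the exact file naming and comparison details are illustrative.

```python
# Sketch of file-based domain control validation: the CA issues a random
# value, the requestor publishes it on the site, and the CA fetches it back.
import secrets
import urllib.request

def issue_challenge() -> str:
    return secrets.token_hex(16)  # random value the requestor must publish

def check_domain_control(domain: str, token: str) -> bool:
    url = f"http://{domain}/.well-known/pki-validation/{token}.txt"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            return resp.read().decode().strip() == token
    except OSError:
        return False  # unreachable or wrong content: validation fails
```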

Despite the existing domain control validation requirements defined by the CA/Browser Forum, peer-reviewed research authored by the Center for Information Technology Policy of Princeton University and others highlighted the risk of Border Gateway Protocol (BGP) attacks and prefix-hijacking resulting in fraudulently issued certificates. This risk was not merely theoretical, as it was demonstrated that attackers successfully exploited this vulnerability on numerous occasions, with just one of these attacks resulting in approximately $2 million in direct losses.

The Chrome Root Program led a work team of ecosystem participants, which culminated in a CA/Browser Forum Ballot to require adoption of MPIC via Ballot SC-067. The ballot received unanimous support from organizations who participated in voting. Beginning March 15, 2025, CAs issuing publicly-trusted certificates must now rely on MPIC as part of their certificate issuance process. Some of these CAs are relying on the Open MPIC Project to ensure their implementations are robust and consistent with ecosystem expectations...
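
Conceptually, MPIC repeats that same domain control check from several network vantage points and requires a quorum to agree before issuance. The sketch below (reusing check_domain_control from the snippet above) illustrates the idea; the vantage-point names and run_check_from dispatcher are hypothetical.

```python
# Sketch of multi-perspective issuance corroboration (MPIC).

VANTAGE_POINTS = ["us-east", "eu-west", "ap-south"]  # illustrative names

def run_check_from(vantage: str, domain: str, token: str) -> bool:
    # Hypothetical: a real deployment would execute the DCV check from
    # infrastructure physically located at `vantage`.
    return check_domain_control(domain, token)

def mpic_validate(domain: str, token: str, quorum: int = 2) -> bool:
    passes = sum(run_check_from(v, domain, token) for v in VANTAGE_POINTS)
    # A localized BGP hijack may fool one perspective, but is far less
    # likely to fool a quorum of geographically separate ones.
    return passes >= quorum
```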

Linting
Linting refers to the automated process of analyzing X.509 certificates to detect and prevent errors, inconsistencies, and non-compliance with requirements and industry standards. Linting ensures certificates are well-formatted and include the necessary data for their intended use, such as website authentication. Linting can expose the use of weak or obsolete cryptographic algorithms and other known insecure practices, improving overall security... The ballot received unanimous support from organizations who participated in voting. Beginning March 15, 2025, CAs issuing publicly-trusted certificates must now rely on linting as part of their certificate issuance process.
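
For a flavor of what a linting pass checks, here is a toy example using the Python cryptography package; real linters such as zlint implement hundreds of rules, and the thresholds below are merely illustrative.

```python
# Toy linting pass over a PEM certificate: flag weak keys, obsolete
# signature algorithms, and over-long validity periods.
import datetime
from cryptography import x509
from cryptography.hazmat.primitives.asymmetric import rsa

def lint_certificate(pem_bytes: bytes) -> list[str]:
    cert = x509.load_pem_x509_certificate(pem_bytes)
    problems = []
    key = cert.public_key()
    if isinstance(key, rsa.RSAPublicKey) and key.key_size < 2048:
        problems.append(f"RSA key too small: {key.key_size} bits")
    if cert.signature_hash_algorithm and cert.signature_hash_algorithm.name == "sha1":
        problems.append("obsolete SHA-1 signature algorithm")
    lifetime = cert.not_valid_after - cert.not_valid_before
    if lifetime > datetime.timedelta(days=398):  # current BR maximum
        problems.append(f"validity period too long: {lifetime.days} days")
    return problems
```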

Linting also improves interoperability, according to the blog post, and helps reduce the risk of non-compliance with standards that can result in certificates being "mis-issued".

And coming up, weak domain control validation methods (currently permitted by the CA/Browser Forum TLS Baseline Requirements) will be prohibited beginning July 15, 2025.

"Looking forward, we're excited to explore a reimagined Web PKI and Chrome Root Program with even stronger security assurances for the web as we navigate the transition to post-quantum cryptography."
AI

Copilot Can't Beat a 2013 'TouchDevelop' Code Generation Demo for Windows Phone

What happens when you ask Copilot to "write a program that can be run on an iPhone 16 to select 15 random photos from the phone, tint them to random colors, and display the photos on the phone"?

That's what TouchDevelop did for the long-discontinued Windows Phone in a 2013 Microsoft Research 'SmartSynth' natural language code generation demo. ("Write scripts by tapping on the screen.")

Long-time Slashdot reader theodp reports on what happens when, a dozen years later, you pose the same question to Copilot: "You'll get lots of code and caveats from Copilot, but nothing that you can execute as is. (Compare that to the functioning 10-line TouchDevelop program.) It's a good reminder that just because GenAI can generate code, it doesn't necessarily mean it will generate the least amount of code, the most understandable or appropriate code for the requestor, or code that runs unchanged and produces the desired results."
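
For scale, here is a rough desktop Python/Pillow analogue of the demo task (not an iPhone program), just to show how little code the task inherently requires; the folder path and tinting approach are illustrative.

```python
# Select 15 random photos, tint each a random color, and display them.
# Assumes a local "photos" folder with at least 15 JPEGs.
import random
from pathlib import Path
from PIL import Image, ImageOps

photos = random.sample(list(Path("photos").glob("*.jpg")), 15)
for path in photos:
    img = Image.open(path).convert("L")  # grayscale, ready for tinting
    tint = tuple(random.randrange(256) for _ in range(3))
    ImageOps.colorize(img, black=(0, 0, 0), white=tint).show()
```
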
theodp also reminds us that TouchDevelop "was (like BASIC) abandoned by Microsoft..." Interestingly, a Microsoft Research video from CS Education Week 2011 shows enthusiastic Washington high school students participating in an hour-long TouchDevelop coding lesson and demonstrating the apps they created that tapped into music, photos, the Internet, and yes, even their phone's functionality. This shows how much iPhone and Android still lack today when it comes to easy programmability for the masses. (When asked, Copilot replied that Apple's Shortcuts app wasn't up to the task.)
Google

Google Sunsets Two Devices From Its Nest Smart Home Product Line (pcworld.com)

"After a long run, Google is sunsetting two of its signature Nest products," reports PC World: Google has just announced that it's discontinuing the 10-year-old Nest Protect and the 7-year-old Nest x Yale lock. Both of those products will continue to work, and — for now — they remain on sale at the Google Store, complete with discounts until supplies run out. But while Google itself is exiting the smoke alarm and smart lock business, it isn't leaving Google Home users in the lurch. Instead, it's teeing up third-party replacements for the Nest Protect and Nest X Yale lock, with both new products coming from familiar brands... Capable of being unlocked via app, entry code, or a traditional key, the Yale Smart Lock with Matter is set to arrive this summer, according to Yale.

While both the existing Nest Protect and Nest x Yale lock will continue to operate and receive security patches, those who purchased the second-generation Nest Protect near its 2015 launch date should probably replace the product anyway. That's because the CO sensors in carbon monoxide detectors like the Nest Protect have a roughly 10-year life expectancy.

Nest Protect and the Nest x Yale lock were two of the oldest products in Google's smart home lineup, and both were showing their age.

Businesses

Reddit's 50% Stock-Price Plunge Fails to Entice Buyers as Growth Slows (yahoo.com)

Though its stock price is still up 200% from its IPO in March 2024, as of last week Reddit's stock had dropped nearly 50% since February 7th.

And then this week, it dropped another 10%, reports Bloomberg, citing both the phenomenon of "volatile technology stocks under pressure" and, more specifically, "the gloomy sentiment around Reddit..." The social media platform has struggled to recover since an earnings report in February showed that it is failing to keep up with larger digital advertising peers such as Meta Platforms Inc. and Alphabet Inc.'s Google, which have higher user figures. Reddit's outlook seemed precarious because its U.S. traffic took a hit from a change in Google's search algorithm.

In recent weeks, the short interest in Reddit — a proxy for the volume of bets against the company — has ticked up, and forecasts for the company's share price have fallen. One analyst opened coverage of Reddit this month with a recommendation that investors sell the shares, in part due to the company's heavy reliance on Google. Reddit shares fell more than 5% in intraday trading Friday. "It's been super overvalued," Bob Lang, founder and chief options analyst at Explosive Options, said of Reddit. "Their growth rate is very strong, but they still are not making any money." Reddit had a GAAP earnings per share loss of $3.33 in 2024, but reported two consecutive quarters of positive GAAP EPS in the second half of the year...

At its February peak, Reddit's stock had risen over 500% from the $34 initial public offering price last March. Some of the enthusiasm was due to a series of deals in which Reddit was paid to allow its content to be used for training artificial intelligence models. More recently, though, there have been questions about the long-term growth prospects for the artificial intelligence industry.

"On Wall Street, the average price target from analysts has fallen to about $195 from $207 a month ago," the article points out. "That still offers a roughly $85 upside from where shares closed following Thursday's 8% slump..."

Meanwhile Reuters reported that more than 33,000 U.S. Reddit users experienced disruptions on Thursday according to Downdetector.com. "A Reddit spokesperson said the outage was due to a bug in a recent update, which has now been fixed."
Google

Google Maps Can Soon Scan Your Screenshots To Plan Your Vacation (theverge.com)

Google is rolling out new AI-powered features across Maps, Search, and Hotels to simplify travel planning, including a screenshot-detection tool in Maps that identifies and saves locations mentioned in image text. The Verge reports: Once the new screenshot list is enabled in Maps, the Gemini-powered feature will detect places that are mentioned in text within screenshots on the device, show users the locations on the map, and allow them to review and save locations to a sharable list. The screenshot list feature will start rolling out in English this week to iOS users in the US, with Android support "coming soon."

AI Overviews for Google Search are also being updated to expand travel planning tools, with itinerary-building features rolling out in English to mobile and desktop devices in the US this week that can create trip ideas for "distinct regions or entire countries." Users can enter prompts like "create a vacation itinerary for Greece that focuses on history" to explore reviews and photos from other users alongside a map of location recommendations, which can be saved to Google Maps or exported to Docs or Gmail.

AI

OpenAI's Viral Studio Ghibli Moment Highlights AI Copyright Concerns (techcrunch.com)

An anonymous reader quotes a report from TechCrunch: It's only been a day since ChatGPT's new AI image generator went live, and social media feeds are already flooded with AI-generated memes in the style of Studio Ghibli, the cult-favorite Japanese animation studio behind blockbuster films such as "My Neighbor Totoro" and "Spirited Away." In the last 24 hours, we've seen AI-generated images representing Studio Ghibli versions of Elon Musk, "The Lord of the Rings", and President Donald Trump. OpenAI CEO Sam Altman even seems to have made his new profile picture a Studio Ghibli-style image, presumably made with GPT-4o's native image generator. Users seem to be uploading existing images and pictures into ChatGPT and asking the chatbot to re-create it in new styles.

OpenAI's latest update comes on the heels of Google's release of a similar AI image feature in its Gemini Flash model, which also sparked a viral moment earlier in March when people used it to remove watermarks from images. OpenAI's and Google's latest tools make it easier than ever to re-create the styles of copyrighted works -- simply by typing a text prompt. Together, these new AI image features seem to reignite concerns at the core of several lawsuits against generative AI model developers. If these companies are training on copyrighted works, are they violating copyright law?

According to Evan Brown, an intellectual property lawyer at the law firm Neal & McDevitt, products like GPT-4o's native image generator operate in a legal gray area today. Style is not explicitly protected by copyright, according to Brown, meaning OpenAI does not appear to be breaking the law simply by generating images that look like Studio Ghibli movies. However, Brown says it's plausible that OpenAI achieved this likeness by training its model on millions of frames from Ghibli's films. Even if that was the case, several courts are still deciding whether training AI models on copyrighted works falls under fair use protections. "I think this raises the same question that we've been asking ourselves for a couple years now," said Brown in an interview. "What are the copyright infringement implications of going out, crawling the web, and copying into these databases?"

Android

Google Will Develop the Android OS Fully In Private

An anonymous reader quotes a report from Android Authority: No matter the manufacturer, every Android phone has one thing in common: its software base. Manufacturers can heavily customize the look and feel of the Android OS they ship on their Android devices, but under the hood, the core system functionality is derived from the same open-source foundation: the Android Open Source Project. After over 16 years, Google is making big changes to how it develops the open source version of Android in an effort to streamline its development. [...] Beginning next week, all Android development will occur within Google's internal branches, and the source code for changes will only be released when Google publishes a new branch containing those changes. As this is already the practice for most Android component changes, Google is simply consolidating its development efforts into a single branch.

This change will have minimal impact on regular users. While it streamlines Android OS development for Google, potentially affecting the speed of new version development and bug reduction, the overall effect will likely be imperceptible. Therefore, don't expect this change to accelerate OS updates for your phone. This change will also have minimal impact on most developers. App developers are unaffected, as it pertains only to platform development. Platform developers, including those who build custom ROMs, will also see little change, since they typically base their work on specific tags or release branches, not the main AOSP branch. Similarly, companies that release forked AOSP products rarely use the main AOSP branch due to its inherent instability.

External developers who enjoy reading or contributing to AOSP will likely be dismayed by this news, as it reduces their insight into Google's development efforts. Without a GMS license, contributing to Android OS development becomes more challenging, as the available code will consistently lag behind by weeks or months. This news will also make it more challenging for some developers to keep up with new Android platform changes, as they'll no longer be able to track changes in AOSP. For reporters, this change means less access to potentially revealing information, as AOSP patches often provide insights into Google's development plans. [...] Google will share more details about this change when it announces it later this week. If you're interested in learning more, be sure to keep an eye out for the announcement and new documentation on source.android.com.
Android Authority's Mishaal Rahman says Google is "committed to publishing Android's source code, so this change doesn't mean that Android is becoming closed-source."

"What will change is the frequency of public source code releases for specific Android components," says Rahman. "Some components like the build system, update engine, Bluetooth stack, Virtualization framework, and SELinux configuration are currently AOSP-first, meaning they're developed fully in public. Most Android components like the core OS framework are primarily developed internally, although some features, such as the unlocked-only storage area API, are still developed within AOSP."
Microsoft

Microsoft Abandons Data Center Projects, TD Cowen Says (bloomberg.com)

Microsoft has walked away from new data center projects in the US and Europe that would have amounted to a capacity of about 2 gigawatts of electricity, according to TD Cowen analysts, who attributed the pullback to an oversupply of the clusters of computers that power artificial intelligence. From a report: The analysts, who rattled investors with a February note highlighting leases Microsoft had abandoned in the US, said the latest move also reflected the company's choice to forgo some new business from ChatGPT maker OpenAI, which it has backed with some $13 billion. Microsoft and the startup earlier this year said they had altered their multiyear agreement, letting OpenAI use cloud-computing services from other companies, provided Microsoft didn't want the business itself.

Microsoft's retrenchment in the last six months included lease cancellations and deferrals, the TD Cowen analysts said in their latest research note, dated Wednesday. Alphabet's Google had stepped in to grab some leases Microsoft abandoned in Europe, the analysts wrote, while Meta Platforms had scooped up some of the freed capacity there.
