Games

EA's Origin App For PC Gaming Will Shut Down On April 17

EA's Origin PC client will be shut down on April 17, 2025, as Microsoft ends support for 32-bit software. "Anyone still using Origin will need to swap over to the EA app before that date," adds Engadget. From the report: For those PC players who have not migrated over to the EA app, the company has an FAQ explaining the latest system requirements. The EA app runs on 64-bit architecture, and requires a machine using Windows 10 or Windows 11. [...] Players simply downloading the EA app on a current machine won't need to re-download their games. And if you have cloud saves enabled, all of your data should transfer without any additional steps.

However, it's always a good idea to have physical backups with this type of transition, especially since not all games support cloud saves, and those titles will need to have saved game data manually transferred. Mods also may not automatically make the switch, and EA recommends players check with mod creators about transferring to the EA app.
AI

AI Boom Gives Rise To 'GPU-as-a-Service' 35

An anonymous reader quotes a report from IEEE Spectrum: The surge of interest in AI is creating a massive demand for computing power. Around the world, companies are trying to keep up with the vast amount of GPUs needed to power more and more advanced AI models. While GPUs are not the only option for running an AI model, they have become the hardware of choice due to their ability to efficiently handle multiple operations simultaneously -- a critical feature when developing deep learning models. But not every AI startup has the capital to invest in the huge numbers of GPUs now required to run a cutting-edge model. For some, it's a better deal to outsource it. This has led to the rise of a new business: GPU-as-a-Service (GPUaaS). In recent years, companies like Hyperbolic, Kinesis, Runpod, and Vast.ai have sprouted up to remotely offer their clients the needed processing power.

[...] Studies have shown that more than half of the existing GPUs are not in use at any given time. Whether we're talking personal computers or colossal server farms, a lot of processing capacity is under-utilized. What Kinesis does is identify idle compute -- both for GPUs and CPUs -- in servers worldwide and compile them into a single computing source for companies to use. Kinesis partners with universities, data centers, companies, and individuals who are willing to sell their unused computing power. Through special software installed on their servers, Kinesis detects idle processing units, preps them, and offers them to their clients for temporary use. [...] The biggest advantage of GPUaaS is economic. By removing the need to purchase and maintain the physical infrastructure, it allows companies to avoid investing in servers and IT management, and to instead put their resources toward improving their own deep learning, large language, and large vision models. It also lets customers pay for the exact amount of GPUs they use, saving the costs of the inevitable idle compute that would come with their own servers.
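The pooling idea Kinesis describes can be illustrated with a toy sketch: poll per-host utilization and treat anything under a threshold as rentable capacity. Everything here is invented for illustration (the host names, the fleet dict, the 10% threshold); a real broker would read live telemetry from its agent software, not a static dict.

```python
# Toy sketch of idle-compute pooling. All numbers and names are hypothetical.
IDLE_THRESHOLD = 0.10  # treat anything under 10% utilization as available

def pool_idle_gpus(fleet):
    """Return (list of (host, gpu_count) pairs considered idle, total idle GPUs)."""
    idle = [(host, info["gpus"]) for host, info in fleet.items()
            if info["utilization"] < IDLE_THRESHOLD]
    return idle, sum(count for _, count in idle)

# Hypothetical mixed fleet: a university lab, a busy render farm, a home desktop.
fleet = {
    "uni-lab-01":   {"gpus": 8, "utilization": 0.02},
    "render-farm":  {"gpus": 4, "utilization": 0.95},
    "desktop-4090": {"gpus": 1, "utilization": 0.00},
}

idle, total = pool_idle_gpus(fleet)
print(total)  # 9 -- the render farm is busy, so only 8 + 1 GPUs are rentable
```

The same loop run continuously, with hosts joining and leaving, is essentially the aggregation layer the article describes.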
The report notes that GPUaaS is growing in profitability. "In 2023, the industry's market size was valued at US $3.23 billion; in 2024, it grew to $4.31 billion," reports IEEE. "It's expected to rise to $49.84 billion by 2032."
Security

Employees of Failed Startups Are at Special Risk of Stolen Personal Data Through Old Google Logins (techcrunch.com) 7

Hackers could steal sensitive personal data from former startup employees by exploiting abandoned company domains and Google login systems, security researcher Dylan Ayrey revealed at ShmooCon conference. The vulnerability particularly affects startups that relied on "Sign in with Google" features for their business software.

Ayrey, CEO of Truffle Security, demonstrated the flaw by purchasing one failed startup's domain and accessing ChatGPT, Slack, Notion, Zoom and an HR system containing Social Security numbers. His research found 116,000 website domains from failed tech startups currently available for sale. While Google offers preventive measures through its OAuth "sub-identifier" system, some providers avoid it due to reliability concerns - which Google disputes. The company initially dismissed Ayrey's finding as a fraud issue before reversing course and awarding him a $1,337 bounty. Google has since updated its documentation but hasn't implemented a technical fix, TechCrunch reports.
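The underlying weakness is worth spelling out: a service that keys accounts on the email address or domain in a Google ID token can be hijacked once a startup's domain changes hands, because the new owner can recreate matching addresses, while Google's stable `sub` claim does not transfer. A minimal sketch of that difference, with invented claim values and a hypothetical `lookup_account` helper (not any real provider's code):

```python
# Sketch: keying on "sub" vs. keying on email. Claim values are invented;
# the claims dicts only mimic the shape of fields in a Google ID token.

def lookup_account(accounts, claims, key_on_sub=True):
    """Return the stored account matching the ID token claims, or None."""
    if key_on_sub:
        # "sub" is a stable per-user identifier that survives a domain transfer.
        return accounts.get(claims["sub"])
    # Keying on email trusts whoever currently controls the domain.
    return accounts.get(claims["email"])

# Account provisioned while the startup still owned example-startup.com.
accounts = {
    "110169484474386276334":      {"owner": "original employee"},
    "alice@example-startup.com":  {"owner": "original employee"},
}

# Token minted later by someone who bought the lapsed domain and
# recreated alice@example-startup.com under a new Google Workspace.
attacker_claims = {"sub": "999999999999999999999",
                   "email": "alice@example-startup.com"}

print(lookup_account(accounts, attacker_claims, key_on_sub=True))   # None: blocked
print(lookup_account(accounts, attacker_claims, key_on_sub=False))  # account: hijacked
```

This is why Google points providers at the sub-identifier: the email matches, but the person behind it has changed.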
United States

The Pentagon Says AI is Speeding Up Its 'Kill Chain' 34

An anonymous reader shares a report: Leading AI developers, such as OpenAI and Anthropic, are threading a delicate needle to sell software to the United States military: make the Pentagon more efficient, without letting their AI kill people. Today, their tools are not being used as weapons, but AI is giving the Department of Defense a "significant advantage" in identifying, tracking, and assessing threats, the Pentagon's Chief Digital and AI Officer, Dr. Radha Plumb, told TechCrunch in a phone interview.

"We obviously are increasing the ways in which we can speed up the execution of kill chain so that our commanders can respond in the right time to protect our forces," said Plumb. The "kill chain" refers to the military's process of identifying, tracking, and eliminating threats, involving a complex system of sensors, platforms, and weapons. Generative AI is proving helpful during the planning and strategizing phases of the kill chain, according to Plumb. The relationship between the Pentagon and AI developers is a relatively new one. OpenAI, Anthropic, and Meta walked back their usage policies in 2024 to let U.S. intelligence and defense agencies use their AI systems. However, they still don't allow their AI to harm humans. "We've been really clear on what we will and won't use their technologies for," Plumb said, when asked how the Pentagon works with AI model providers.
Linux

Linux 6.13 Released (phoronix.com) 25

"Nothing horrible or unexpected happened last week," Linus Torvalds posted tonight on the Linux kernel mailing list, "so I've tagged and pushed out the final 6.13 release."

Phoronix says the release has "plenty of fine features": Linux 6.13 comes with the introduction of the AMD 3D V-Cache Optimizer driver for benefiting multi-CCD Ryzen X3D processors. The new AMD EPYC 9005 "Turin" server processors will now default to AMD P-State rather than ACPI CPUFreq for better power efficiency....

Linux 6.13 also brings additional Rust programming language infrastructure, among other changes.

Phoronix notes that Linux 6.13 also brings "the start of Intel Xe3 graphics bring-up, support for many older (pre-M1) Apple devices like numerous iPads and iPhones, NVMe 2.1 specification support, and AutoFDO and Propeller optimization support when compiling the Linux kernel with the LLVM Clang compiler."

And some lucky Linux kernel developers will also be getting a guitar pedal soldered by Linus Torvalds himself, thanks to a generous offer he announced a week ago: For _me_ a traditional holiday activity tends to be a LEGO build or two, since that's often part of the presents... But in addition to the LEGO builds, this year I also ended up doing a number of guitar pedal kit builds ("LEGO for grown-ups with a soldering iron"). Not because I play guitar, but because I enjoy the tinkering, and the guitar pedals actually do something and are the right kind of "not very complex, but not some 5-minute 555 LED blinking thing"...

[S]ince I don't actually have any _use_ for the resulting pedals (I've already foisted off a few on unsuspecting victims^Hfriends), I decided that I'm going to see if some hapless kernel developer would want one.... as an admittedly pretty weak excuse to keep buying and building kits...

"It may be worth noting that while I've had good success so far, I'm a software person with a soldering iron. You have been warned... [Y]ou should set your expectations along the lines of 'quality kit built by a SW person who doesn't know one end of a guitar from the other.'"
Google

Google Upgrades Open Source Vulnerability Scanning Tool with SCA Scanning Library (googleblog.com) 2

In 2022 Google released a tool to easily scan for vulnerabilities in dependencies named OSV-Scanner. "Together with the open source community, we've continued to build this tool, adding remediation features," according to Google's security blog, "as well as expanding ecosystem support to 11 programming languages and 20 package manager formats... Users looking for an out-of-the-box vulnerability scanning CLI tool should check out OSV-Scanner, which already provides comprehensive language package scanning capabilities..."

Thursday they also announced an extensible library for "software composition analysis" scanning (as well as file-system scanning) named OSV-SCALIBR (Open Source Vulnerability — Software Composition Analysis LIBRary). The new library "combines Google's internal vulnerability management expertise into one scanning library with significant new capabilities such as:
  • Software composition analysis for installed packages, standalone binaries, as well as source code
  • OS package scanning on Linux (COS, Debian, Ubuntu, RHEL, and much more), Windows, and Mac
  • Artifact and lockfile scanning in major language ecosystems (Go, Java, Javascript, Python, Ruby, and much more)
  • Vulnerability scanning tools such as weak credential detectors for Linux, Windows, and Mac
  • Software Bill of Materials (SBOM) generation in SPDX and CycloneDX, the two most popular document formats
  • Optimization for on-host scanning of resource-constrained environments where performance and low resource consumption are critical

"OSV-SCALIBR is now the primary software composition analysis engine used within Google for live hosts, code repos, and containers. It's been used and tested extensively across many different products and internal tools to help generate SBOMs, find vulnerabilities, and help protect our users' data at Google scale. We offer OSV-SCALIBR primarily as an open source Go library today, and we're working on adding its new capabilities into OSV-Scanner as the primary CLI interface."


Printer

Proposed New York Law Could Require Background Checks Before Buying 3D Printers (news10.com) 225

A new law is being considered by New York's state legislature, reports a local news outlet, which "if passed, will require anyone buying a 3D printer to pass a background check. If you can't legally own a firearm, you won't be able to buy one of these printers..." It is illegal to print most gun parts in New York. Attorney Greg Rinckey believes the proposal is an overreach. "I think this is also gonna face some constitutional problems. I mean, it really comes down to a legal parsing of what are you printing and at what point is it technically a firearm...?"

[Ascent Fabrication owner Joe] Fairley thinks lawmakers should shift their focus to the partial gun kits that produce the metal firing components. Another possibility is to require printer manufacturers to install software that prevents gun parts from being printed. "They would need to agree on some algorithm to look at the part and say nope, that is a gun component, you're not allowed to print that part somehow," said Fairley. "But I feel like it would be extremely difficult to get to that point."

AI

Arrested by AI: When Police Ignored Standards After AI Facial-Recognition Matches (msn.com) 55

A county transit police detective fed a poor-quality image to an AI-powered facial recognition program, reports the Washington Post, leading to the arrest of "Christopher Gatlin, a 29-year-old father of four who had no apparent ties to the crime scene nor a history of violent offenses." He was unable to post the $75,000 cash bond required, and "jailed for a crime he says he didn't commit, it would take Gatlin more than two years to clear his name." A Washington Post investigation into police use of facial recognition software found that law enforcement agencies across the nation are using the artificial intelligence tools in a way they were never intended to be used: as a shortcut to finding and arresting suspects without other evidence... The Post reviewed documents from 23 police departments where detailed records about facial recognition use are available and found that 15 departments spanning 12 states arrested suspects identified through AI matches without any independent evidence connecting them to the crime — in most cases contradicting their own internal policies requiring officers to corroborate all leads found through AI. Some law enforcement officers using the technology appeared to abandon traditional policing standards and treat software suggestions as facts, The Post found. One police report referred to an uncorroborated AI result as a "100% match." Another said police used the software to "immediately and unquestionably" identify a suspected thief.

Gatlin is one of at least eight people wrongfully arrested in the United States after being identified through facial recognition... All of the cases were eventually dismissed. Police probably could have eliminated most of the people as suspects before their arrest through basic police work, such as checking alibis, comparing tattoos, or, in one case, following DNA and fingerprint evidence left at the scene.

Some statistics from the article about the eight wrongfully-arrested people:
  • In six cases police failed to check alibis
  • In two cases police ignored evidence that contradicted their theory
  • In five cases police failed to collect key pieces of evidence
  • In three cases police ignored suspects' physical characteristics
  • In six cases police relied on problematic witness statements

The article provides two examples of police departments forced to pay $300,000 settlements after wrongful arrests caused by AI mismatches. But "In interviews with The Post, all eight people known to have been wrongly arrested said the experience had left permanent scars: lost jobs, damaged relationships, missed payments on car and home loans. Some said they had to send their children to counseling to work through the trauma of watching their mother or father get arrested on the front lawn.

"Most said they also developed a fear of police."


AI

World's First AI Chatbot, ELIZA, Resurrected After 60 Years (livescience.com) 37

"Scientists have just resurrected 'ELIZA,' the world's first chatbot, from long-lost computer code," reports LiveScience, "and it still works extremely well." (Click in the vintage black-and-green rectangle for a blinking-cursor prompt...) Using dusty printouts from MIT archives, these "software archaeologists" discovered defunct code that had been lost for 60 years and brought it back to life. ELIZA was developed in the 1960s by MIT professor Joseph Weizenbaum and named for Eliza Doolittle, the protagonist of the play "Pygmalion," who was taught how to speak like an aristocratic British woman.

As a language model that the user could interact with, ELIZA had a significant impact on today's artificial intelligence (AI), the researchers wrote in a paper posted to the preprint database arXiv Sunday (Jan. 12). The "DOCTOR" script written for ELIZA was programmed to respond to questions as a psychotherapist would. For example, ELIZA would say, "Please tell me your problem." If the user input "Men are all alike," the program would respond, "In what way."
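That call-and-response style is easy to approximate with a few keyword rules. The sketch below is not Weizenbaum's MAD-SLIP code, just a hypothetical pattern-matcher illustrating the idea; the real DOCTOR script ranks keywords and reflects pronouns ("me" to "you") far more carefully.

```python
import re

# Minimal ELIZA-style rule list, invented for illustration.
# Each rule is (trigger pattern, response template).
RULES = [
    (re.compile(r"\b(all|alike|always)\b", re.I), "In what way"),
    (re.compile(r"\bmy (.+)", re.I), "Tell me more about your {0}"),
    (re.compile(r"\bI am (.+)", re.I), "How long have you been {0}?"),
]

def respond(text: str) -> str:
    """Return the first matching rule's response, or the default opener."""
    for pattern, template in RULES:
        match = pattern.search(text)
        if match:
            return template.format(*match.groups())
    return "Please tell me your problem."

print(respond("Men are all alike"))  # In what way
print(respond("my boyfriend made me come here"))
```

The famous "Men are all alike" / "In what way" exchange falls out of the first rule; everything else is just string substitution, which is exactly why ELIZA's apparent intelligence surprised its own author.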

Weizenbaum wrote ELIZA in a now-defunct programming language he invented, called Michigan Algorithm Decoder Symmetric List Processor (MAD-SLIP), but it was almost immediately copied into the language Lisp. With the advent of the early internet, the Lisp version of ELIZA went viral, and the original version became obsolete. Experts thought the original 420-line ELIZA code was lost until 2021, when study co-author Jeff Shrager, a cognitive scientist at Stanford University, and Myles Crowley, an MIT archivist, found it among Weizenbaum's papers. "I have a particular interest in how early AI pioneers thought," Shrager told Live Science in an email. "Having computer scientists' code is as close to having a record of their thoughts, and as ELIZA was — and remains, for better or for worse — a touchstone of early AI, I want to know what was in his mind...."

Even though it was intended to be a research platform for human-computer communication, "ELIZA was such a novelty at the time that its 'chatbotness' overwhelmed its research purposes," Shrager said.

I just remember that time 23 years ago when someone connected a Perl version of ELIZA to "an AOL Instant Messenger account that has a high rate of 'random' people trying to start conversations" to "put ELIZA in touch with the real world..."

Thanks to long-time Slashdot reader MattSparkes for sharing the news.
AI

Google Reports Halving Code Migration Time With AI Help 12

Google computer scientists have been using LLMs to streamline internal code migrations, achieving significant time savings of up to 89% in some cases. The findings appear in a pre-print paper titled "How is Google using AI for internal code migrations?" The Register reports: Their focus is on bespoke AI tools developed for specific product areas, such as Ads, Search, Workspace and YouTube, instead of generic AI tools that provide broadly applicable services like code completion, code review, and question answering. Google's code migrations involved: changing 32-bit IDs in the 500-plus-million-line codebase for Google Ads to 64-bit IDs; converting its old JUnit3 testing library to JUnit4; and replacing the Joda time library with Java's standard java.time package. The int32 to int64 migration, the Googlers explain, was not trivial as the IDs were often generically defined (int32_t in C++ or Integer in Java) and were not easily searchable. They existed in tens of thousands of code locations across thousands of files. Changes had to be tracked across multiple teams and changes to class interfaces had to be considered across multiple files. "The full effort, if done manually, was expected to require hundreds of software engineering years and complex cross-team coordination," the authors explain.
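The "not easily searchable" point is worth unpacking: a plain grep for `int32_t` drowns in unrelated hits, so any discovery tooling has to narrow matches to declarations whose names look like the IDs being migrated. The regex, field names, and code snippet below are invented for illustration and are not Google's actual scripts:

```python
import re

# Hypothetical heuristic: match int32_t declarations whose variable names
# contain an Ads-style ID stem. Real tooling (code search, Kythe) would use
# semantic indexing rather than a regex, but the filtering problem is the same.
ID_DECL = re.compile(r"\bint32_t\s+(\w*(?:campaign|customer|ad)_id\w*)\b")

snippet = """
int32_t campaign_id = GetCampaign();
int32_t retry_count = 0;          // not an ID; should not match
int32_t customer_id_hash = Hash(customer);
int64_t ad_group_id = 0;          // already migrated; should not match
"""

matches = [m.group(1) for m in ID_DECL.finditer(snippet)]
print(matches)  # ['campaign_id', 'customer_id_hash']
```

Scaled to tens of thousands of locations, even a good filter leaves a long tail of ambiguous hits, which is where the paper says the LLM-generated, human-verified changes came in.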

For their LLM-based workflow, Google's software engineers implemented the following process. An engineer from Ads would identify an ID in need of migration using a combination of code search, Kythe, and custom scripts. Then an LLM-based migration toolkit, triggered by someone knowledgeable in the art, was run to generate verified changes containing code that passed unit tests. Those changes would be manually checked by the same engineer and potentially corrected. Thereafter, the code changes would be sent to multiple reviewers who are responsible for the portion of the codebase affected by the changes. The result was that 80 percent of the code modifications in the change lists (CLs) were purely the product of AI; the remainder were either human-authored or human-edited AI suggestions.

"We discovered that in most cases, the human needed to revert at least some changes the model made that were either incorrect or not necessary," the authors observe. "Given the complexity and sensitive nature of the modified code, effort has to be spent in carefully rolling out each change to users." Based on this, Google undertook further work on LLM-driven verification to reduce the need for detailed review. Even with the need to double-check the LLM's work, the authors estimate that the time required to complete the migration was reduced by 50 percent. With LLM assistance, it took just three months to migrate 5,359 files and modify 149,000 lines of code to complete the JUnit3-JUnit4 transition. Approximately 87 percent of the code generated by AI ended up being committed with no changes. As for the Joda-Java time framework switch, the authors estimate a time saving of 89 percent compared to the projected manual change time, though no specifics were provided to support that assertion.
AI

AI Tools Crack Down on Wall Street Trader Code Speak (msn.com) 21

Compliance software firms are deploying AI to decode complex trader communications and detect potential financial crimes as Wall Street and London regulators intensify scrutiny of market manipulation.

Companies like Behavox and Global Relay are developing AI tools that can interpret trader slang, emoji-laden messages and even coded language that traditional detection systems might miss, WSJ reports. The technology aims to replace older methods that relied on scanning for specific trigger words, which traders could easily evade. The story adds: Traders believed that "if somebody wanted to say something sketchy, they would just make up a funny word or, you know, spell it backward or something," [Donald] McElligott (VP of Global Relay) said. "Now, none of that's going to work anymore."
Transportation

Toyota Unit Hino Motors Reaches $1.6 Billion US Diesel Emissions Settlement (msn.com) 8

An anonymous reader quotes a report from Reuters: Toyota Motor unit Hino Motors has agreed to a $1.6 billion settlement with U.S. agencies and will plead guilty over excess diesel engine emissions in more than 105,000 U.S. vehicles, the company and U.S. government said on Wednesday. The Japanese truck and engine manufacturer was charged with fraud in U.S. District Court in Detroit for unlawfully selling 105,000 heavy-duty diesel engines in the United States from 2010 through 2022 that did not meet emissions standards. The settlement, which still must be approved by a U.S. judge, includes a criminal penalty of $521.76 million, $442.5 million in civil penalties to U.S. authorities and $236.5 million to California.

A company-commissioned panel said in a report in 2022 Hino had falsified emissions data on some engines going back to at least 2003. Hino agreed to plead guilty to engaging in a multi-year criminal conspiracy and serve a five-year term of probation, during which it will be barred from importing any diesel engines it has manufactured into the U.S., and carry out a comprehensive compliance and ethics program, the Justice Department and Environmental Protection Agency said. [...] The settlement includes a mitigation program, valued at $155 million, to offset excess air emissions from the violations by replacing marine and locomotive engines, and a recall program, valued at $144.2 million, to fix engines in 2017-2019 heavy-duty trucks.

The EPA said Hino admitted that between 2010 and 2019, it submitted false applications for engine certification approvals and altered emission test data, conducted tests improperly and fabricated data without conducting any underlying tests. Hino President Satoshi Ogiso said the company had improved its internal culture, oversight and compliance practices. "This resolution is a significant milestone toward resolving legacy issues that we have worked hard to ensure are no longer a part of Hino's operations or culture," he said in a statement.
Toyota's Hino Motors isn't the only automaker to admit to selling vehicles with excess diesel emissions. Volkswagen had to pay billions in fines after it admitted in 2015 to cheating emissions tests by installing "defeat devices" and sophisticated software in nearly 11 million vehicles worldwide. Daimler (Mercedes-Benz), BMW, Opel/Vauxhall (General Motors), and Fiat Chrysler have been implicated in similar practices.
AI

Apple Pulls AI-Generated Notifications For News After Generating Fake Headlines 20

An anonymous reader quotes a report from CNN: Apple is temporarily pulling its newly introduced artificial intelligence feature that summarizes news notifications after it repeatedly sent users error-filled headlines, sparking backlash from a news organization and press freedom groups. The rare reversal from the iPhone maker on its heavily marketed Apple Intelligence feature comes after the technology produced misleading or altogether false summaries of news headlines that appear almost identical to regular push notifications.

On Thursday, Apple deployed a beta software update to developers that disabled the AI feature for news and entertainment headlines, which it plans to later roll out to all users while it works to improve the AI feature. The company plans to re-enable the feature in a future update. As part of the update, the company said the Apple Intelligence summaries, which users must opt into, will more explicitly emphasize that the information has been produced by AI, signaling that it may sometimes produce inaccurate results.
AI

AI Slashes Google's Code Migration Time By Half (theregister.com) 74

Google has cut code migration time in half by deploying AI tools to assist with large-scale software updates, according to a new research paper from the company's engineers. The tech giant used large language models to help convert 32-bit IDs to 64-bit across its 500-million-line codebase, upgrade testing libraries, and replace time-handling frameworks. While 80% of code changes were AI-generated, human engineers still needed to verify and sometimes correct the AI's output. In one project, the system helped migrate 5,359 files and modify 149,000 lines of code in three months.
Microsoft

Microsoft Patches Windows To Eliminate Secure Boot Bypass Threat (arstechnica.com) 39

Microsoft has patched a Windows vulnerability that allowed attackers to bypass Secure Boot, a critical defense against firmware infections, the company said. The flaw, tracked as CVE-2024-7344, affected Windows devices for at least seven months. Security researcher Martin Smolar discovered the vulnerability in a signed UEFI application within system recovery software from seven vendors, including Howyar.

The application, reloader.efi, circumvented standard security checks through a custom PE loader. Administrative attackers could exploit the vulnerability to install malicious firmware that persists even after disk reformatting. Microsoft revoked the application's digital signature, though the vulnerability's impact on Linux systems remains unclear.
United States

A New Jam-Packed Biden Executive Order Tackles Cybersecurity, AI, and More (wired.com) 127

U.S. President Joe Biden has issued a comprehensive cybersecurity executive order, four days before leaving office, mandating improvements to government network monitoring, software procurement, AI usage, and foreign hacker penalties.

The 40-page directive aims to leverage AI's security benefits, implement digital identities for citizens, and address vulnerabilities that have allowed Chinese and Russian intrusions into U.S. government systems. It requires software vendors to prove secure development practices and gives the Commerce Department eight months to establish mandatory cybersecurity standards for government contractors.
Programming

Replit CEO on AI Breakthroughs: 'We Don't Care About Professional Coders Anymore' (semafor.com) 168

Replit, an AI coding startup platform, has made a dramatic pivot away from professional programmers in a fundamental shift in how software may be created in the future. "We don't care about professional coders anymore," CEO Amjad Masad told Semafor, as the company refocuses on helping non-developers build software using AI.

The strategic shift follows the September launch of Replit's "Agent" tool, which can create working applications from simple text commands. The tool, powered by Anthropic's Claude 3.5 Sonnet AI model, has driven a five-fold revenue increase in six months. The move marks a significant departure for Replit, which built its business providing online coding tools for software developers. The company is now betting that AI will make traditional programming skills less crucial, allowing non-technical users to create software through natural language instructions.
Government

Governments Call For Spyware Regulations In UN Security Council Meeting (techcrunch.com) 13

An anonymous reader quotes a report from TechCrunch: On Tuesday, the United Nations Security Council held a meeting to discuss the dangers of commercial spyware, which marks the first time this type of software -- also known as government or mercenary spyware -- has been discussed at the Security Council. The goal of the meeting, according to the U.S. Mission to the UN, was to "address the implications of the proliferation and misuse of commercial spyware for the maintenance of international peace and security." The United States and 15 other countries called for the meeting. While the meeting was mostly informal and didn't end with any concrete proposals, most of the countries involved, including France, South Korea, and the United Kingdom, agreed that governments should take action to control the proliferation and abuse of commercial spyware. Russia and China, on the other hand, dismissed the concerns.

John Scott-Railton, a senior researcher at The Citizen Lab, a human rights organization that has investigated spyware abuses since 2012, gave testimony in which he sounded the alarm on the proliferation of spyware made by "a secretive global ecosystem of developers, brokers, middlemen, and boutique firms," which "is threatening international peace and security as well as human rights." Scott-Railton called Europe "an epicenter of spyware abuses" and a fertile ground for spyware companies, referencing a recent TechCrunch investigation that showed Barcelona has become a hub for spyware companies in the last few years.

Representatives of Poland and Greece, countries that had their own spyware scandals involving software made by NSO Group and Intellexa, respectively, also intervened. Poland's representative pointed at local legislative efforts to put "more control, including by the judiciary, on the relevant operational activities of the security and intelligence services," while also recognizing that spyware can be used in a legal way. "We are not saying that the use of spyware is never justified or even required," said Poland's representative. And the Greek representative pointed to the country's 2022 bill to ban the sale of spyware.

Biotech

Startup Raises $200 Million To 'De-Extinct' the Woolly Mammoth, Thylacine and Dodo (venturebeat.com) 123

An anonymous reader quotes a report from VentureBeat: Colossal BioSciences has raised $200 million in a new round of funding to bring back extinct species like the woolly mammoth. Dallas- and Boston-based Colossal is making strides in the scientific breakthroughs toward "de-extinction," or bringing back extinct species like the woolly mammoth, thylacine and the dodo. [...] Since launching in September 2021, Colossal has raised $435 million in total funding. This latest round of capital places the company at a $10.2 billion valuation. Colossal will leverage this latest infusion of capital to continue to advance its genetic engineering technologies while pioneering new revolutionary software, wetware and hardware solutions, which have applications beyond de-extinction including species preservation and human healthcare.

"Our recent successes in creating the technologies necessary for our end-to-end de-extinction toolkit have been met with enthusiasm by the investor community. TWG Global and our other partners have been bullish in their desire to help us scale as quickly and efficiently as possible," said Colossal CEO Ben Lamm in a statement. "This funding will grow our team, support new technology development, expand our de-extinction species list, while continuing to allow us to carry forth our mission to make extinction a thing of the past."
Here's a summary of the startup's progress on its efforts to bring back the woolly mammoth, thylacine and the dodo:

Woolly Mammoth De-extinction Progress
- Generated chromosome-scale reference genomes for elephants and the first de novo assembled mammoth genome
- Acquired and aligned 60+ ancient mammoth genomes and 30+ genomes of extant elephant species, improving mammoth-specific variant accuracy
- Derived pluripotent stem cells for Asian elephants, advancing reproductive technologies essential for de-extinction

Thylacine De-extinction Progress
- Created a 99.9% complete ancient genome for the thylacine using long-read and RNA sequencing
- Assembled telomere-to-telomere genomes of dasyurid species to understand evolutionary relationships and support conservation of marsupials
- Progress in genomics and reproductive technologies positions Colossal ahead of schedule on critical de-extinction steps

Dodo De-extinction Progress
- Completed high-coverage genomes for the dodo, its relatives, and the critically endangered manumea
- Developed tools for avian genome engineering, including techniques for craniofacial gene-editing and primordial germ cell cultivation
- Significant advances in avian-specific genetic techniques are driving progress toward dodo restoration and bird conservation
