Google

Google Drive Will Stop Syncing With Google Photos In July (digitaltrends.com) 48

In an effort to create a "simplified experience," Google said in a blog post today that Google Photos will stop syncing to Drive in July. Digital Trends reports: The change is sure to be controversial. For many, the fact that Photos automatically syncs to Google Drive is a favorite feature, as it allows for much easier organization of photos. Of course, the change will avoid some confusion. According to Google, the change is aimed at helping "prevent accidental deletions of items across products." In other words, it seems like some users were confused about the fact that deleting a copy of a photo in Photos also means that the image is deleted in Drive, and vice versa. The blog post notes that the two services will still work together to an extent. The company announced a new feature called "Upload from Drive," which will allow users to manually select photos and videos to be imported into Photos. Once the items are uploaded, the files won't be connected, so you can delete the file in one without it being removed in the other.

Additionally, Backup and Sync will continue to work on both Windows and Mac, "so if you store your photos locally and want to then sync them to either Google Drive or Google Photos, you'll still be able to do so," reports Digital Trends. Google also notes that existing photos and videos will stay in both Photos and Drive, but the Google Photos folder in Drive will no longer update automatically.
Power

Tesla Says Solar Roof Is On Its Third Iteration, Currently Installing In 8 States (techcrunch.com) 133

An anonymous reader quotes a report from TechCrunch: Tesla is currently installing its solar roof product in eight states, according to Elon Musk, speaking at the Tesla Annual Shareholder Meeting on Tuesday. The solar roof-tile project has had a relatively long genesis since being unveiled three years ago, in 2016. In addition to having installations run in eight states, Musk said the solar roof product is currently on version three, and that this version is very exciting to him because it offers a chance of reaching cost parity with an equivalent entry-level traditional tile, when you include the utility costs you'd save by generating your own power instead. Regarding timelines for wider rollout of the solar roof products at the costs he anticipates, his own words probably say it best: "I'm sometimes a little optimistic about time frames -- it's time you knew," he joked at the meeting.
Power

US Report Finds Sky Is the Limit For Geothermal Energy Beneath Us (arstechnica.com) 154

An anonymous reader quotes a report from Ars Technica: Geothermal power sources come in many forms, and they're typically much more subtle than steam shooting out of the ground. In reality, geothermal energy could be a big player in our future mix. That is made clear by the U.S. Department of Energy's recently released "GeoVision" report. The report follows similar evaluations of wind, solar, and hydropower energy and leans on information from national labs and other science agencies. It summarizes what we know about the physical resources in the U.S. and also examines the factors that have been limiting geothermal's deployment. Overall, the report shows that we could do a whole lot more with geothermal energy -- both for generating electricity and for heating and cooling -- than we currently do.

There are opportunities to more than double the amount of electricity generated at conventional types of hydrothermal sites, where wells can easily tap into hot water underground. That's economical on the current grid. But the biggest growth potential, according to the report, is in so-called "enhanced geothermal systems." These involve areas where the temperatures are hot but the bedrock lacks enough fractures and pathways for hot water to circulate freely -- or simply lacks the water entirely. Advancing enhanced geothermal techniques alone could produce 45 gigawatts of electricity by 2050. Add in the more conventional plants, and you're at 60 gigawatts -- 26 times more than current geothermal generation. And in a scenario where natural gas prices go up, making geothermal even more competitive, we could double that to 120 gigawatts. That would be fully 16 percent of the total projected 2050 generation in the U.S.
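To put those multiples in perspective, here is a quick back-of-the-envelope check of what the quoted figures imply, written as a short Python snippet (a sketch based only on the numbers above; the report's own baselines may be defined differently, e.g. capacity versus generated energy):

    enhanced_gw = 45       # enhanced geothermal systems alone, by 2050
    combined_gw = 60       # enhanced plus conventional hydrothermal, by 2050
    high_gas_gw = 120      # scenario with higher natural gas prices

    # "26 times more than current geothermal generation"
    implied_current_gw = combined_gw / 26
    # "fully 16 percent of the total projected 2050 generation"
    implied_total_2050_gw = high_gas_gw / 0.16

    print(f"Implied current geothermal: ~{implied_current_gw:.1f} GW")
    print(f"Implied total projected 2050 generation: ~{implied_total_2050_gw:.0f} GW")

That backs out roughly 2.3 GW of geothermal today and about 750 GW of projected 2050 generation -- treat these only as sanity checks on the quoted ratios, not as figures from the report itself.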
The report also estimates that installations of traditional ground-source heat pumps, which circulate fluid through loops in the ground to provide cooling in the summer and heating in the winter, could be increased 14 times over, to 28 million homes by 2050, "covering 23 percent of national residential demand." When factoring in the limitations for how quickly the market could realistically change, the number only goes down to 19 million homes -- still a massive increase.

Meanwhile, district heating systems, where a single, large geothermal installation pipes heat to all the buildings in an area, could be more widely deployed to more than 17,000 locations, covering heating needs for 45 million homes.
Power

Hydrogen Station Explodes, Toyota Halts Sales of Hydrogen Cars In Norway (electrek.co) 217

Socguy writes: The Uno-X hydrogen station in Sandvika in Baerum exploded on Monday, injuring two people in a nearby non-fuel-cell vehicle. The company operating the station has suspended operation at its other locations following the explosion. With the refueling network crippled, Toyota and Hyundai have announced that they are temporarily halting sales of fuel cell vehicles. Jon Andre Lokke, CEO of Nel Hydrogen, the company operating those hydrogen refueling stations, commented: "It is too early to speculate on the cause and what has gone wrong. Our top priority is the safe operation of the stations we have delivered. As a precaution, we have temporarily put ten other stations in standby mode in anticipation of more information."

Here's what Toyota Norway manager Espen Olsen had to say: "We don't know exactly what happened on the Uno-X drive yet, so we don't want to speculate. But we stop the sale until we have learned what has happened, and for practical reasons, since it is not possible to fill fuel now." He added: "This does not change our view of hydrogen, and it is important for us to point out that hydrogen cars are at least as safe as ordinary cars. The hydrogen tanks themselves are so robust that you can shoot them with a gun without knocking them."
AMD

AMD Unveils Zen 2 CPU Architecture, Navi GPU Architecture and a Slew of Products (hothardware.com) 167

MojoKid writes: AMD let loose today with a number of high-profile launches at the E3 2019 Expo in Los Angeles, CA. The company disclosed its full Zen 2 Ryzen 3000 series microarchitecture, which AMD claims offers an IPC uplift of 15% generation over generation, thanks to better branch prediction, higher integer throughput, and reduced effective latency to memory. Zen 2 also significantly beefs up floating point throughput with double the FP performance of the previous generation. AMD also announced a 16-core/32-thread variant, dubbed Ryzen 3950X, that drops at $750 -- a full $950 cheaper than a similarly specced 16-core Intel Core i9-9960X. On the graphics side, AMD's RDNA architecture in Navi will power the company's new Radeon RX 5700 series, which is said to offer performance competitive with NVIDIA's GeForce RTX 2070 and 2060 series. The Navi-based GPU at the heart of the upcoming Radeon RX 5700 series is manufactured on TSMC's 7nm process node and features GDDR6 memory, along with PCI Express 4.0 interface support. Versus AMD's previous-generation GCN (Graphics Core Next) architecture, RDNA delivers more than 50% better performance-per-watt and 25% better overall performance. More than 50% of that improvement comes from architecture optimizations, according to AMD; the GPU also gets a boost from its 7nm process and frequency gains. Radeon RX 5700 and 5700 XT cards will be available on July 7th, along with AMD Ryzen 3000 chips, but pricing hasn't been established yet for the Radeon GPUs.
Power

The Lost History of Sodium Wiring 111

Long-time Slashdot reader Rei writes: On the face of it, sodium seems like about the worst thing you could make a wire out of — it oxidizes rapidly in air, releases hot hydrogen gas in water, melts at 97.8 degrees Centigrade, and has virtually no tensile strength. Yet, in the late 1960s and early 1970s, the Nacon Corporation did just that — producing thousands of kilometers of high-gauge sodium wiring for electrical utilities — and it worked surprisingly well.

While sodium has three times the (volumetric) resistivity of copper and nearly double that of aluminium, its incredibly low density gives it a gravimetric resistivity less than a third of copper and half of aluminium. Priced similarly to aluminium per unit resistivity (and much cheaper than copper), limitless, and with almost no environmental impact apart from its production energy consumption, sodium wiring proved to be much more flexible without the fatigue or installation damage risks of aluminium. The polyethylene insulation proved to offer sufficient tensile strength on its own to safely pull the wire through conduits, while matching its thermal expansion coefficient. The wiring proved to have tamer responses to both over-current (no insulation burnoff) and over-voltage (high corona inception voltage) scenarios than aluminium as well. Meanwhile, "accidental cutting" tests, such as with a backhoe, showed that such events posed no greater danger than cutting copper or aluminium cabling. Reliability results in operation were mixed — while few reliability problems were reported with the cables themselves, the low-voltage variety of Nacon cables appeared to have unreliable end connectors, causing some of the cabling to need to be repaired during 13 years of utility-scale testing.
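The resistivity claims above are easy to sanity-check with commonly quoted room-temperature material constants (the specific values below are assumptions of mine, not figures from the submission, and sodium's published resistivity varies a bit by source):

    # resistivity in ohm-metres, density in kg/m^3 (approximate 20 C values)
    materials = {
        "sodium":    (4.8e-8,  971),
        "copper":    (1.68e-8, 8960),
        "aluminium": (2.65e-8, 2700),
    }

    na_rho, na_den = materials["sodium"]
    for name in ("copper", "aluminium"):
        rho, den = materials[name]
        volumetric = na_rho / rho                      # per unit volume
        gravimetric = (na_rho * na_den) / (rho * den)  # per unit mass
        print(f"sodium vs {name}: {volumetric:.1f}x volumetric, "
              f"{gravimetric:.2f}x gravimetric")

With these constants the ratios come out around 2.9x and 1.8x volumetric, and roughly 0.31x and 0.65x gravimetric; the exact figures shift depending on which value you take for sodium's resistivity, but they are broadly consistent with the claims above.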

Ultimately, it was economics, not technical factors, that doomed sodium wiring. Lifecycle costs, at 1970s pricing, showed that using sodium wiring was similar to or slightly more expensive for utilities than using aluminium. Without an unambiguous and significant economic case to justify taking on the risks of going larger scale, there was a lack of utility interest, and Nacon ceased production.
Power

Should the UK Re-Open An Old, Cracked Nuclear Reactor? (mirror.co.uk) 264

"Nuclear experts have warned against re-opening a 43-year-old Scottish nuclear reactor riddled with cracks over fears of a meltdown," writes the Daily Mirror.

An anonymous reader quotes their report: Hunterston B nuclear power plant was shut down last year after it was found that Reactor 3 had almost 400 cracks in it -- exceeding the operational limit. EDF, which own the plant in Ardrossan, Ayrshire, are pushing to return the reactor to service at the end of June and July and want to extend the operational limit on cracks allowed from 350 to 700. However, the plans to reopen the plant have sparked fears it could lead to a nuclear meltdown similar to the 1986 Chernobyl disaster.

Experts have warned that in the very worst case the hot graphite core could become exposed to air and ignite leading to radioactive contamination and evacuation of a large area of Scotland's central belt -- including Glasgow and Edinburgh. According to Dr Ian Fairlie, an independent consultant on radioactivity in the environment, and Dr David Toke, Reader in Energy Policy at the University of Aberdeen, the two reactors definitely should not be restarted...

The plant, which is more than 40 years old, can generate enough electricity to power more than 1.7 million homes, and is one of Britain's eight nuclear plants which provide around 20 percent of the country's electricity.

Nuclear expert Professor Neil Hyat reminds The Sun that the reactor will be shut down by 2030 -- and "possibly earlier."
Hardware Hacking

Maker Faire and Make Magazine Have Laid Off All Staff and Paused All Operations (techcrunch.com) 117

McGruber quotes TechCrunch: "Maker Media Inc ceased operations this week and let go of all of its employees — about 22 employees," founder and CEO Dale Dougherty told TechCrunch. "I started this 15 years ago and it's always been a struggle as a business to make this work. Print publishing is not a great business for anybody, but it works . . . barely. Events are hard . . . there was a drop off in corporate sponsorship." Microsoft and Autodesk failed to sponsor this year's flagship Bay Area Maker Faire.

But Dougherty is still desperately trying to resuscitate the company in some capacity, if only to keep MAKE:'s online archive running and continue allowing third-party organizers to license the Maker Faire name to throw affiliated events. Rather than bankruptcy, Maker Media is working through an alternative Assignment for Benefit of Creditors process.

"We're trying to keep the servers running" Dougherty tells me. "I hope to be able to get control of the assets of the company and restart it. We're not necessarily going to do everything we did in the past but I'm committed to keeping the print magazine going and the Maker Faire licensing program." The fate of those hopes will depend on negotiations with banks and financiers over the next few weeks. For now the sites remain online.

Earth

Researchers Propose Solar Methanol Island Using Ocean CO2 (arstechnica.com) 251

A PNAS paper published this week outlines a plan to establish 70 islands of solar panels, each 328 feet in diameter, that send electricity to a hard-hulled ship that acts as an oceanic factory. "This factory uses desalinization and electrolysis equipment to extract hydrogen gas (H2) and carbon dioxide (CO2) from the surrounding ocean water," reports Ars Technica. "It then uses these products to create methanol, a liquid fuel that can be added into, or substituted for, transportation fuels. Every so often, a ship comes to offload the methanol and take it to a supply center on land." From the report: The researchers estimated that we would need approximately 170,000 of these solar island systems to be able to produce enough green methanol to replace all fossil fuels used in long-haul transportation. While that seems like a lot, it's theoretically possible, even if we restrict these systems to ocean expanses where waves don't reach more than seven feet high and there's enough sunlight to meet the system's yearly average need.
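For a sense of scale, treating each 328-foot island as a solid disc of panels (a simplification of mine, not a figure from the paper) gives the following rough totals:

    import math

    island_diameter_m = 328 * 0.3048            # 328 ft is roughly 100 m
    island_area_m2 = math.pi * (island_diameter_m / 2) ** 2

    islands_per_facility = 70
    facilities_needed = 170_000                 # to displace long-haul fossil fuels

    total_area_km2 = island_area_m2 * islands_per_facility * facilities_needed / 1e6
    print(f"area per island:   ~{island_area_m2:,.0f} m^2")
    print(f"total island area: ~{total_area_km2:,.0f} km^2")

That works out to roughly 90,000 square kilometres of floating solar islands, on the order of the land area of Portugal, which underlines why the authors frame this as a prototype description rather than a deployment plan.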

Still, the authors admit that this is just the description of a possible prototype: whether it's practical to build or not will depend on the cost of the technology that supports the system, as well as the cost of competing forms of energy used in transportation. Cleaning and maintaining this equipment in a marine environment is also a concern, and the researchers admit that there may be room for alternate setups (like making another fuel instead of methanol) that might make more economic sense. For now, though, it's a compelling idea to avoid additional fossil fuel extraction that is within reach using existing technology.

AI

Training a Single AI Model Can Emit As Much Carbon As Five Cars In Their Lifetimes 156

In a new paper, researchers at the University of Massachusetts, Amherst, performed a life cycle assessment for training several common large AI models. They found that the process can emit more than 626,000 pounds of carbon dioxide equivalent -- nearly five times the lifetime emissions of the average American car (and that includes manufacture of the car itself). MIT Technology Review reports: The researchers looked at four models in the field that have been responsible for the biggest leaps in performance: the Transformer, ELMo, BERT, and GPT-2. They trained each on a single GPU for up to a day to measure its power draw. They then used the number of training hours listed in the model's original papers to calculate the total energy consumed over the complete training process. That number was converted into pounds of carbon dioxide equivalent based on the average energy mix in the US, which closely matches the energy mix used by Amazon's AWS, the largest cloud services provider.
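The conversion the researchers describe boils down to measured power draw, times training hours, times an average grid carbon intensity. A minimal sketch of that accounting looks like the following (the power figures, GPU count, PUE overhead factor, and grid intensity below are illustrative assumptions, not the study's measured values):

    def training_co2e_lbs(gpu_watts, num_gpus, hours,
                          pue=1.58, lbs_co2e_per_kwh=0.954):
        """Estimate pounds of CO2-equivalent for a training run.

        pue scales for datacenter overhead; lbs_co2e_per_kwh is an average
        US grid carbon intensity. Both defaults are assumptions here.
        """
        kwh = gpu_watts * num_gpus * hours / 1000.0 * pue
        return kwh * lbs_co2e_per_kwh

    # Hypothetical run: 64 GPUs drawing 250 W each for two weeks.
    print(f"~{training_co2e_lbs(250, 64, 24 * 14):,.0f} lbs CO2e")

Even that modest hypothetical run lands in the thousands of pounds of CO2-equivalent, which is why the heavily tuned models discussed below reach six-figure totals.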

They found that the computational and environmental costs of training grew proportionally to model size and then exploded when additional tuning steps were used to increase the model's final accuracy. In particular, they found that a tuning process known as neural architecture search, which tries to optimize a model by incrementally tweaking a neural network's design through exhaustive trial and error, had extraordinarily high associated costs for little performance benefit. Without it, the most costly model, BERT, had a carbon footprint of roughly 1,400 pounds of carbon dioxide equivalent, close to a round-trip trans-American flight. What's more, the researchers note that the figures should only be considered as baselines.
Using a model they'd produced in a previous paper as a case study, the researchers "found that the process of building and testing a final paper-worthy model required training 4,789 models over a six-month period," the report states. "Converted to CO2 equivalent, it emitted more than 78,000 pounds and is likely representative of typical work in the field."
Robotics

Boston Dynamics Prepares To Launch Its First Commercial Robot: Spot (theverge.com) 52

Boston Dynamics is about to launch its first ever commercial product -- a quadrupedal robot named Spot. The Verge reports: Spot is currently being tested in a number of "proof-of-concept" environments, Boston Dynamics' CEO Marc Raibert told The Verge, including package delivery and surveying work. And although there's no firm launch date for the commercial version of Spot, it should be available within months, said Raibert, and certainly before the end of the year. "We're just doing some final tweaks to the design," said the CEO. "We've been testing them relentlessly."

Rather than selling the robot as a single-use tool, Boston Dynamics is positioning it as a "mobility platform" that can be customized by users to complete a range of tasks. A Spot robot mounted with 3D cameras can map environments like construction sites, identifying hazards and work progress. When equipped with a robot arm, it has even greater flexibility, able to open doors and manipulate objects. At Re:MARS, a Spot with a robot arm used it to pick up items, including a cuddly toy that was then offered to a flesh-and-blood police dog. The dog was unimpressed with the robot, but happy, at least, to receive the toy. Raibert says it's this "athletic intelligence" that Boston Dynamics will be selling through its robots. Think of it like Amazon's AWS business, but instead of offering computing power on tap, it's offering robotic mobility.
How much will Spot cost? Raibert only said that the commercial version will be "much less expensive than prototypes [and] we think they'll be less expensive than other people's quadrupeds."

He did, however, reveal that the company had already found some paying customers, including construction companies in Japan who are testing Spot as a way to oversee the progress of work on sites.
Robotics

Ikea Is Introducing Robotic Furniture For People Who Live In Small Spaces (theverge.com) 121

Ikea has partnered with American furniture startup Ori Living to develop a new robotic furniture system for people living in small spaces. Called Rognan, the collection includes a large storage unit that can slide across a room via a touchpad, dividing it into two living spaces, along with a bed, a desk, and a couch that can be pulled out when needed. It will launch first in Hong Kong and Japan in 2020. The Verge reports: Rognan is built on Ori's robotic platform, and works with Ikea's Platsa line of storage furniture. It's also compatible with Ikea's Tradfri line of cabinet and wardrobe smart lighting. Ikea says the Rognan can save an extra eight square meters (about 86 square feet) of living space. That might not sound like much, but if you live in a tiny home, it could make all the difference. The Verge notes that Ori's line of automated furniture started as a concept from MIT's CityHome project in 2014. It launched for real estate developers and Airbnbs for $10,000 as Ori Systems.
Cellphones

Get Ready For Under-Display Smartphone Cameras (arstechnica.com) 73

An anonymous reader quotes a report from Ars Technica: With in-screen fingerprint readers quickly becoming a regular feature of flagship phones, manufacturers are starting to wonder about what other things they can stick under the display. In the past day, both Oppo and Xiaomi have taken to social media to show off the latest development: under-display front-facing cameras. Forget camera notches, hole punch displays, and complicated pop-up mechanisms; the under-display camera enables all-screen smartphone designs with no moving parts.

Under-display cameras will work a lot like optical under-display fingerprint readers -- a CMOS chip will be placed under a transparent section of the display, and it will peer through the pixels to see the outside world. For an optical fingerprint reader, the image capturing setup only needs to be of high enough quality to identify the ridges and valleys of your fingertip. For selfies and video chats, there will be much higher demands for image quality, and we wonder what obstructing the camera view with pixels will do to the image quality. Both Xiaomi and Oppo shared videos of the in-display cameras working, but the videos are too low quality to make any kind of image quality determinations. When you aren't taking a picture, the display pixels work normally, and when it's picture time, the pixels around the camera turn off, allowing the camera to see through the display. Xiaomi detailed some of its implementation, saying it was using a "special low-reflective glass" for better image quality.

Displays

Apple Unveils 6K 'Pro Display XDR' Monitor That Starts At $5,000 (cnet.com) 237

One of the most ridiculous announcements made at Apple's WWDC on Monday was the new Pro Display XDR monitor. It's a monitor made to pair with the new Mac Pro, complete with top-level specs and a staggering $5,000 starting price. CNET reports: The monitor's chief feature is high-dynamic range, aka HDR. Doing HDR correctly requires a lot of horsepower to illuminate the screen, and the XDR monitor can get exceedingly bright -- and stay that way. Apple says an advanced cooling system can maintain its 1,000 nits brightness "indefinitely." The monitor has a full-array backlight with 576 zones of full array local dimming -- more than just about any similarly equipped TV available. That advanced dimming tech likely contributes to the incredibly high 1,000,000:1 contrast ratio specification.

At 32 inches and a resolution of 6,016 x 3,384, the Pro Display XDR is Apple's largest retina display ever. While not used in many TVs (which are either 4K or 8K), the 6K resolution is increasingly popular for video capture, with cameras like the Panasonic Lumix S1H, Sony Venice, and models from Red doing 6K. Apple has also improved the screen to better control reflections and offers a new matte option called "nano-texture," with glass etched at the nanometer level for low reflectivity and less glare. The matte option brings the price of the monitor up to $6,000. Apple also talks up its polarizer technology and wide off-axis viewing angle. Pre-set reference modes include HDR video (P3-ST 2084), Digital Cinema (P3-DCI) and Photography (P3-D65).
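For context on what "6K" means in pixel terms, here is a quick comparison against common display resolutions (the comparison figures are standard values, not from the article):

    resolutions = {
        "Pro Display XDR (6K)":    (6016, 3384),
        "5K (27-inch iMac class)": (5120, 2880),
        "4K UHD":                  (3840, 2160),
    }

    for name, (w, h) in resolutions.items():
        print(f"{name}: {w}x{h} = {w * h / 1e6:.1f} megapixels")

At about 20.4 megapixels, the panel carries roughly two and a half times the pixels of a 4K display.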
In traditional Apple fashion, the Pro Display XDR does not ship with a stand -- you'll have to buy that separately. The optional $999 Pro Stand allows users to articulate the screen and place it in various positions. It has tilt, height, and rotation adjustment, meaning you can rotate it from landscape to portrait mode, just like your iPhone.

Apple is also selling a VESA mount adapter for $199, but that will require you to buy another third-party stand.
IOS

iOS 13: Apple Brings Dark Mode To iPhones and Multitasking Overhaul To iPads (arstechnica.com) 51

An anonymous reader quotes a report from Ars Technica: iOS 13 will introduce Dark Mode to iPhones, iPads, and iPods for the first time. Apple brought Dark Mode to Macs via macOS Mojave last year, to much fanfare. As was the case there, Dark Mode doesn't actually change anything about the interface -- just the aesthetics. Apple showed Dark Mode running on the company's first-party apps for news, calendar, messages, and more. Dark Mode may also save battery life on devices with emissive OLED displays -- savings like that were discovered in our own tests comparing Android devices with LCD and OLED displays. But we'll have to test the new OS to be sure.

Every iOS update brings changes to key apps made by Apple itself, and most of the apps included with a new installation of iOS have seen some changes. Mail now allows you to mute certain conversations. Maps has a new, easier way of accessing saved locations. The upgrade to Apple Maps will bring far more detail to the overhead view of roads and landmarks, with this rolling out to the entire United States by the end of 2019 and "select countries" next year. Reminders has seen a ground-up interface overhaul, with natural-language processing similar to what's seen in third-party apps -- you'll be able to type the relevant details and Reminders will understand when and where the reminder should be set. Apple is also adding a swipe-typing ability to its iOS keyboard for the first time, replicating something that has been available in third-party keyboards for years. Notes will have a new gallery view and support for shared folders. Safari will have new options to change text sizing, with per-website settings.
The iPad's multitasking UI has also been overhauled, bringing a new window-based experience and an easier way to switch between apps in Slide Over mode. You'll also be able to plug thumb drives into newer iPads with USB-C.
Open Source

NLNet Funds Development of a Libre RISC-V 3D CPU (crowdsupply.com) 75

The NLNet Foundation is a non-profit supporting privacy, security, and the "open internet". Now the group has approved funding for the hybrid Libre RISC-V CPU/VPU/GPU -- funding that will "pay for full-time engineering work to be carried out over the next year, and to pay for bounty-style tasks."

Long-time Slashdot reader lkcl explains why that's significant: High security software is irrelevant if the hardware is fundamentally compromised, for example with the Intel spying backdoor co-processor known as the Management Engine. The Libre RISCV SoC was begun as a way for users to regain trust and ownership of the hardware that they legitimately purchase.

This processor will be the first of its kind, as the first commercial SoC designed to give users the hardware and software source code of the 3D GPU, Video Decoder, main processor, boot process and the OS.

Shockingly, in the year 2019, whilst there are dozens of SoCs with full source code that are missing either a VPU or a GPU (such as the TI OMAP Series and Xilinx ZYNQ7000s), there does not exist a single commercial embedded SoC which has full source code for the bootloader, CPU, VPU and GPU. The iMX6 for example has etnaviv support for its GPU however the VPU is proprietary, and all of Rockchip and Allwinner's offerings use either MALI or PowerVR yet their VPUs have full source (reverse engineered in the case of Allwinner).

This processor, which will be a quad-core, dual-issue 800MHz RV64GC capable of running full GNU/Linux SMP OSes, with 720p video playback and embedded-level 25fps 3D performance in around 2.5 watts at 28nm, is designed to address that imbalance. Links and details are on the Libre RISC-V SoC wiki.

The real question is: why is this project the only one of its kind, and why has no well funded existing Fabless Semiconductor Company tried something like this before? The benefits to businesses of having full source code are already well-known.

Communications

The Invention of USB, 'The Port That Changed Everything' (fastcompany.com) 231

harrymcc shares a Fast Company article about "the generally gnarly process once required to hook up peripherals" in the late 1990s -- and one Intel engineer who saw the need for "one plug to rule them all." In the olden days, plugging something into your computer -- a mouse, a printer, a hard drive -- required a zoo of cables. Maybe you needed a PS/2 connector or a serial port, the Apple Desktop Bus, or a DIN connector; maybe a parallel port or SCSI or Firewire cable. If you've never heard of those things, and if you have, thank USB.

When it was first released in 1996, the idea was right there in the first phrase: Universal Serial Bus. And to be universal, it had to just work. "The technology that we were replacing, like serial ports, parallel ports, the mouse and keyboard ports, they all required a fair amount of software support, and any time you installed a device, it required multiple reboots and sometimes even opening the box," says Ajay Bhatt, who retired from Intel in 2016. "Our goal was that when you get a device, you plug it in, and it works."

It was at Intel in Oregon where engineers made it work, at Intel where they drummed up the support of an industry that was eager to make PCs easier to use and ship more of them. But it was an initial skeptic that first popularized the standard: in a shock to many geeks in 1998, the Steve Jobs-led Apple released the groundbreaking first iMac as a USB-only machine. The faster speeds of USB 2.0 paved the way for new easy-to-use peripherals too, like the flash drive, which helped kill the floppy disk, the Zip drive, and CD-Rs. What followed was a parade of stuff you could plug in: disco balls, head massagers, security keys, an infinity of mobile phone chargers. There are now, by one count, six billion USB devices in the world.

The article includes a thorough oral history of USB's development, and points out there's now also a new reversible Type-C cable design. And USB4, coming later this year, "will be capable of achieving speeds upwards of 40Gbps, which is over 3,000 times faster than the highest speeds of the very first USB."
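That "over 3,000 times" figure checks out if the baseline is USB 1.0's 12 Mbps full-speed mode (an assumption on my part; measured against the 1.5 Mbps low-speed mode, the multiple would be far larger):

    usb1_full_speed_mbps = 12          # USB 1.0 full speed
    usb4_mbps = 40_000                 # USB4 at 40 Gbps

    print(f"USB4 vs USB 1.0: ~{usb4_mbps / usb1_full_speed_mbps:,.0f}x faster")

That prints roughly 3,333x, matching the article's comparison.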

"Bhatt couldn't have imagined all of that when, as a young engineer at Intel in the early '90s, he was simply trying to install a multimedia card."
Earth

Robot Boat Wins $4 Million Ocean Floor Mapping XPRIZE (bbc.com) 19

"A robotic boat and submersible have won the XPRIZE to find the best new technologies to map the seafloor," writes the BBC -- taking home the grand prize of $4 million.

dryriver shares their report: The surface and underwater combo demonstrated their capabilities in a timed test in the Mediterranean, surveying depths down to 4km. [2.48 miles -- slightly deeper than the ocean's average depth of 2.3 miles.] Put together by the international GEBCO-NF Alumni team, the autonomous duo are likely now to play a role in meeting the "Seabed 2030" challenge. This aims to have Earth's ocean floor fully mapped to a high standard. Currently, only 20% of the world's sub-surface topography has been resolved to an acceptable level of accuracy...

The group triumphed by packaging an existing, state-of-the-art solution with a novel twist. So, while its HUGIN autonomous underwater vehicle (AUV) is an established industry tool for echo-sounding the depths, its uncrewed surface vessel (USV) that deployed and recovered the sub was developed specially for the competition... On arrival, the chosen technologies had just 24 hours to make an extensive, high-resolution (5m or better) bathymetric (depth) map; and take multiple pictures of the seabed. The GEBCO-NF Alumni team covered 278 sq km in its allotted time, returning more than 10 images of identifiable geological features.
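A rough sense of the scale of the Seabed 2030 goal, using a standard estimate of the ocean's surface area (an assumption of mine, not a figure from the article) and the winning team's 24-hour coverage:

    ocean_area_km2 = 361e6           # approximate total ocean surface
    unmapped_fraction = 0.80         # "only 20% ... resolved to an acceptable level"
    coverage_km2_per_day = 278       # the GEBCO-NF Alumni team's timed run

    days = ocean_area_km2 * unmapped_fraction / coverage_km2_per_day
    print(f"~{days / 365:,.0f} system-years of surveying at that rate")

That comes to roughly 2,800 system-years, which is why the 2030 target depends on fleets of autonomous platforms like this one rather than any single vessel.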

Music

Teen Makes His Own AirPods For $4 (vice.com) 123

samleecole writes: Apple's AirPods are a tragedy. Ecologically, socially, economically -- they're a capitalist disaster. The opposite of AirPods, then, is this extremely punk pair of DIY wireless earbuds that someone on Reddit hacked together using an old pair of wired Apple headphones and some hot glue. "I started this project roughly two months ago when my friend got a new pair of AirPods for his birthday and I thought to myself, 'that's quite a lot of money for something I can make at home,'" Sam Cashbook, who is 15, told Motherboard in a Reddit message.

Cashbook started watching videos of people making their own AirPods, but mostly found people chopping the wires off of Apple headphones as a joke. He decided to take his own approach. He bought a hands-free bone conduction headset from eBay, and took apart the casing to reveal the electronics. Then, he desoldered the wires from the original speaker in the headset, and connected his old Apple earbud speaker to the headset's printed circuit board. The result may be a little uglier, but the headphones work well, he said. The set has buttons for power, pausing music, volume controls, and skipping tracks, and the battery is rechargeable.

Graphics

Ask Slashdot: Why Is 3D Technology Stagnating So Badly? 188

dryriver writes: If you had asked someone doing 3D graphics seriously back in 2000 what 3D technology would look like two decades later, in 2019, they might have said: "Most internet websites will have realtime 3D content embedded or will be completely in 3D. 3D Games will look as good as movies or reality. Everyone will have a cheap handheld 3D scanner to capture 3D models with. High-end VR headsets, gloves, bodysuits and haptics devices will be sold in electronics stores. Still and video cameras will be able to capture true holographic 3D images and video of the real world. TVs and broadcast TV content will be in holographic 3D. 3D stuff you create on a PC will be realtime -- no more waiting for images to slowly render thanks to really advanced new 3D hardware. 3D content creation software will be incredibly advanced and fast to work with in 2019. Many new types of 3D input devices will be available that make working in 3D a snap."

Except, of course, that in the real 2019, none of this has come true at all, and the entire 3D field has been stagnating very, very badly since around 2010. It almost seems like a small army of 3D technology geniuses pushed and pushed 3D software and hardware hard during the 80s, 90s, 2000s, then retired or dropped off the face of the earth completely around 10 years ago. Why is this? Are consumers only interested in Facebook, YouTube, cartoony PlayStation graphics and smartphones anymore? Are we never going to see another major 3D technology innovation push again?
