Enough Chinese have worked or studied abroad and then returned home that there's a term for them -- "sea turtles." But while a job at a U.S. tech giant once conferred near-unparalleled status, homegrown companies -- from giants like Tencent to up-and-comers like news app Toutiao -- are now often just as prestigious. Baidu Inc. -- a search giant little-known outside of China -- convinced ex-Microsoft standout Qi Lu to helm its efforts in AI, making him one of the highest-profile returnees of recent years.
The Verge adds: Google is thought to be losing money on every unit of the Home Mini; Reuters reported on one analysis that pegged the device's parts alone at $26, not including the cost of developing the entire thing, supporting it, advertising it, shipping it, and so on. Of course, Google is in this for the long game -- the Assistant is an attempt to make sure Google remains the way people get information, and Google has plenty of options to make money through ads or the data it collects in the future...
Amazon is also believed to be losing money on the Echo Dot, which was similarly cut to $29 during the holiday season. Amazon never gives out specific sales figures, but it did say that "tens of millions" of its own Alexa-enabled devices were sold over the holidays, with the Echo Dot being one of the top sellers... These super cheap prices are getting people to buy smart speakers and commit to an ecosystem. These companies are clearly happy to spend a few dollars gaining customers in the short term so that they have an enormous audience available to them down the road.
These stickers were created so that the algorithm finds them 'more interesting' than the rest of the image and focuses most of its attention on analyzing the pattern, assigning the rest of the image's content a lower importance and thus ignoring or misclassifying it.
The technique "works in the real world, and can be disguised as an innocuous sticker," note the researchers -- describing them as "targeted adversarial image patches."
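The core idea behind a targeted patch can be illustrated with a toy sketch (this is not the researchers' actual method, and the model, sizes, and step count below are illustrative assumptions): restrict pixel changes to a small patch region and run gradient ascent on the score the classifier assigns to a chosen target class.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for an image classifier: 3 class scores, linear in an 8x8 image.
# (Real attacks target deep networks; a linear model keeps the sketch tiny.)
W = rng.normal(size=(3, 64))

def logits(img):
    return W @ img.reshape(-1)

image = rng.normal(size=(8, 8))
target = 2  # class the patch should push the classifier toward

# The patch may only touch a fixed 3x3 corner of the image. For a linear model,
# the gradient of the target logit with respect to the pixels is just W[target].
patch_region = (slice(0, 3), slice(0, 3))
grad = W[target].reshape(8, 8)[patch_region]

patched = image.copy()
for _ in range(100):
    # Gradient ascent on the target logit, restricted to the patch pixels
    # and clipped to a fixed valid pixel range.
    patched[patch_region] = np.clip(patched[patch_region] + 0.1 * grad, -10, 10)

# The optimized patch raises the target class's score while every pixel
# outside the 3x3 region is left untouched -- the "sticker" effect.
```

Against a deep network the gradient would come from backpropagation rather than a closed form, and robust patches are additionally optimized over many images, rotations, and scales, but the patch-constrained ascent loop is the same shape.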
At a glance, the way NEMESIS works is relatively simple. At its core is an "inference graph" -- a mathematical model trained on images classified as Nazi or white supremacist symbols. Using machine learning, this inference graph lets the system identify the symbols in the wild, whether they appear in pictures or videos. In a way, NEMESIS is dumb, according to Crose, because there are still humans involved, at least at the beginning. NEMESIS needs a human to curate the pictures of the symbols in the inference graph and make sure they are being used in a white supremacist context. For Crose, that's the key to the whole project -- she absolutely does not want NEMESIS to flag users who post Hindu swastikas, for example -- so NEMESIS needs to understand the context. "It takes thousands and thousands of images to get it to work just right," she said.
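The human-in-the-loop workflow described above can be sketched in a few lines (this is not NEMESIS's actual code; the features, class shapes, and nearest-centroid classifier are stand-in assumptions): a curator supplies only context-appropriate positive examples, a model is fit to them, and new images are flagged by which class they resemble.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical curated feature vectors. A human curator keeps only symbols
# used in a white-supremacist context in the positive set, so e.g. a Hindu
# swastika photographed in a religious setting would be excluded from it.
positive = rng.normal(loc=1.0, size=(50, 4))   # curated hate-symbol examples
negative = rng.normal(loc=-1.0, size=(50, 4))  # everything else

# Nearest-centroid classifier as a tiny stand-in for the trained inference graph.
pos_centroid = positive.mean(axis=0)
neg_centroid = negative.mean(axis=0)

def flag(features):
    """True if an image's features sit closer to the curated hate-symbol class."""
    return bool(np.linalg.norm(features - pos_centroid)
                < np.linalg.norm(features - neg_centroid))
```

The "thousands and thousands of images" Crose mentions correspond to growing the curated positive set until the decision boundary separates symbol-in-context from lookalikes.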
You have access to 24-hour if-it-bleeds-it-leads news. You have access to the incredibly important tweets and selfies people post, and the equally important YouTube comments under the latest Taylor Swift or rap video. You read Slashdot as well. Every day.
What kind of poem do you, great AI poetry engine, write based on these inputs?
One article even describes the possibility of malevolent brain-to-brain networks in the future, warning scientists (and the international community) to "remain vigilant about neurotechnologies as they become more refined -- and as the practical barriers to their malevolent use begin to lower."