Nvidia’s researchers teach a robot to perform simple tasks by observing a human

Industrial robots are typically all about repeating a well-defined task over and over again. Usually, that means performing those tasks a safe distance away from the fragile humans who programmed them. More and more, however, researchers are thinking about how robots can work in close proximity to humans and even learn from them. In part, that’s what Nvidia’s new robotics lab in Seattle focuses on, and the company’s research team today presented some of its most recent work on teaching robots new tasks by observing humans at the International Conference on Robotics and Automation (ICRA) in Brisbane, Australia.

Nvidia’s director of robotics research Dieter Fox.

As Dieter Fox, the senior director of robotics research at Nvidia (and a professor at the University of Washington), told me, the team wants to enable the next generation of robots that can safely work in close proximity to humans. But to do that, those robots need to be able to detect people, track their activities and learn how they can help them. That may be in a small-scale industrial setting or in somebody’s home.

While it’s possible to train an algorithm to successfully play a video game through rote repetition, letting it learn from its mistakes, Fox argues that the decision space for training robots that way is far too large to do this efficiently. Instead, a team of Nvidia researchers led by Stan Birchfield and Jonathan Tremblay developed a system that allows them to teach a robot to perform new tasks by simply observing a human.

The tasks in this example are pretty straightforward and involve nothing more than stacking a few colored cubes. But it’s also an important step in this overall journey to enable us to quickly teach a robot new tasks.

The researchers first trained a sequence of neural networks to detect objects, infer the relationship between them and then generate a program to repeat the steps it witnessed the human perform. The researchers say this new system allowed them to train their robot to perform this stacking task with a single demonstration in the real world.

One nifty aspect of this system is that it generates a human-readable description of the steps it’s performing. That way, it’s easier for the researchers to figure out what happened when things go wrong.
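
For readers who want a feel for what such a generated, human-readable program might look like, here is a minimal Python sketch of the pipeline the researchers describe: detect the blocks, infer which block rests on which, and emit readable steps to replay. This is an illustration only; every class, function and threshold below is a hypothetical stand-in, not Nvidia’s code.

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Block:
    color: str
    x: float
    y: float
    z: float  # height above the table, in meters

def infer_relationships(blocks: List[Block]) -> List[Tuple[str, str]]:
    """Work out which block rests directly on which, from detected positions.

    In the real system a trained network infers this; simple geometry stands in here.
    """
    blocks = sorted(blocks, key=lambda b: b.z)  # bottom-up
    relations = []
    for upper in blocks:
        for lower in blocks:
            if upper is lower:
                continue
            aligned = abs(upper.x - lower.x) < 0.02 and abs(upper.y - lower.y) < 0.02
            directly_above = 0.0 < upper.z - lower.z < 0.07  # roughly one block height
            if aligned and directly_above:
                relations.append((upper.color, lower.color))
    return relations

def generate_program(relations: List[Tuple[str, str]]) -> List[str]:
    """Emit human-readable steps like the ones described in the article."""
    return [f"pick up the {top} block and place it on the {bottom} block"
            for top, bottom in relations]

# Example: the scene after a human has stacked red on green on blue.
scene = [Block("blue", 0.5, 0.5, 0.00),
         Block("green", 0.5, 0.5, 0.05),
         Block("red", 0.5, 0.5, 0.10)]

for step in generate_program(infer_relationships(scene)):
    print(step)
```

Running the sketch prints two steps, bottom block first, which is exactly the kind of inspectable trace that makes debugging easier when the robot gets something wrong.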

Nvidia’s Stan Birchfield tells me that the team aimed to make training the robot easy for a non-expert — and few things are easier to do than to demonstrate a basic task like stacking blocks. In the example the team presented in Brisbane, a camera watches the scene and the human simply walks up, picks up the blocks and stacks them. Then the robot repeats the task. Sounds easy enough, but it’s a massively difficult task for a robot.

To train the core models, the team mostly used synthetic data from a simulated environment. As both Birchfield and Fox stressed, it’s these simulations that allow for quickly training robots. Training in the real world would take far longer, after all, and can also be far more dangerous. And for most of these tasks, there is no labeled training data available to begin with.
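
A rough sketch of why simulated data is so attractive: a scene generator hands out ground-truth labels for free and can randomize the conditions a detector must learn to ignore. The generator and parameter ranges below are invented for illustration and are not taken from Nvidia’s pipeline.

```python
import random

COLORS = ["red", "green", "blue", "yellow"]

def random_scene(num_blocks: int = 3) -> dict:
    """One synthetic training example, with ground-truth labels attached for free."""
    blocks = [{
        "color": color,
        "position": [round(random.uniform(0.2, 0.8), 3),   # x on the table
                     round(random.uniform(0.2, 0.8), 3),   # y on the table
                     0.0],                                  # resting on the surface
    } for color in random.sample(COLORS, num_blocks)]
    # Randomize nuisance factors the detector must learn to ignore.
    render_params = {
        "light_intensity": random.uniform(0.3, 1.5),
        "camera_height": random.uniform(0.8, 1.4),
        "table_texture": random.choice(["wood", "metal", "cloth"]),
    }
    return {"blocks": blocks, "render_params": render_params}

# Thousands of perfectly labeled examples in seconds -- something that would take
# far longer (and be riskier) to collect with a physical robot.
dataset = [random_scene() for _ in range(10_000)]
print(len(dataset), "synthetic examples, e.g.:", dataset[0])
```
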

“We think using simulation is a powerful paradigm going forward to train robots to do things that weren’t possible before,” Birchfield noted. Fox echoed this and noted that this need for simulations is one of the reasons why Nvidia thinks that its hardware and software is ideally suited for this kind of research. There is a very strong visual aspect to this training process, after all, and Nvidia’s background in graphics hardware surely helps.

Fox admitted that there’s still a lot of research left to be done here (most of the simulations aren’t photorealistic yet, after all), but that the core foundations for this are now in place.

Going forward, the team plans to expand the range of tasks that the robots can learn and the vocabulary necessary to describe those tasks.

Meet Alchemist Accelerator’s latest demo day cohort

An IoT-enabled lab for cannabis farmers, a system for catching drones mid-flight and the Internet of Cows are a few of the 17 startups exhibiting today at Alchemist Accelerator’s 18th demo day. The event, which will be streamed live here, focuses on big data and AI startups with an enterprise bent.

The startups are showing their stuff at Juniper’s Aspiration Dome in Sunnyvale, California at 3pm today, but you can catch the whole event online if you want to see just what computers and cows have in common. Here are the startups pitching onstage.

Tarsier – Tarsier has built AI computer vision to detect drones. The founders discovered the need while getting their MBAs at Stanford, after one had completed a PhD in aeronautics. Drones are proliferating. And getting into places they shouldn’t — prisons, R&D centers, public spaces. Securing these spaces today requires antiquated military gear that’s clunky and expensive. Tarsier is all software. And cheap, allowing them to serve markets the others can’t touch.

Lightbox – Retail 3D is sexy — think virtual try-ons, VR immersion, ARKit stores. But creating these experiences means creating 3D models of thousands of products. Today, artists slog through this process, outputting a few models per day. Lightbox wants to eliminate the humans. This duo of recent UPenn and Stanford Computer Science grads claim their approach to 3D scanning is pixel perfect without needing artists. They have booked $40,000 to date and want to digitize all of the world’s products.

Vorga – Cannabis is big business — more than $7 billion in revenue today and growing fast. The crop’s quality — and a farmer’s income — is highly sensitive to a few chemicals in it. Farmers today test the chemical composition of their crops through outsourced labs. Vorga’s bringing the lab in-house to the cannabis farmer via their IoT platform. The CEO has a PhD in chemical physics, and formerly helped the Department of Defense keep weapons of mass destruction out of the hands of terrorists. She’s now helping cannabis farmers get high… revenue.

Neulogic – Neulogic is founded by a duo of Computer Science PhDs that led key parts of Walmart.com product search. They now want to solve two major problems facing the online apparel industry: the need to provide curated inspiration to shoppers and the need to offset rising customer acquisition costs by selling more per order. Their solution combines AI with a fashion knowledge graph to generate outfits on demand.

Intensivate – Life used to be simple. Enterprises would use servers primarily for function-driven applications like billing. Today, servers are all about big data, analytics and insight. Intensivate thinks servers need a new chip upgrade to reflect that change. They are building a new CPU they claim gets 12x the performance for the same cost. Hardware plays like this are hard to pull off, but this might be the team to do it. It includes the former co-founder and CEO of CPU startup QED, which was acquired for $2.3 billion, and a PhD in parallel computation who was on the design team for the Alpha CPU from DEC.

Integry – SaaS companies put a lot of effort into building out integrations. Integry provides app creators their own integrations marketplace with pre-boarded partners so they can have apps working with theirs from the get go. The vision is to enable app creators to mimic their own Slack app directory without spending the years or the millions. Because these integrations sit inside their app, Integry claims setup rates are significantly better and churn is reduced by as much as 40 percent.

Cattle Care – AI video analytics applied to cows! Cattle Care wants to increase dairy farmers’ revenue by more than $1 million per year and make cows healthier at the same time. The product identifies cows in the barn by their unique black and white patterns. Algorithms collect parameters such as walking distance, interactions with other cows, feeding patterns and other variables to detect diseases early. Then the system sends alerts to farm employees when they need to take action, and confirms the problem has been solved afterwards.

VadR – VR/AR is grappling with a lack of engaging content. VadR thinks the cause is a broken feedback loop of analytics to the creators. This trio of IIT-Delhi engineers has built machine learning algorithms that get smarter over time and deliver actionable insights on how to modify content to increase engagement.

Tika – This duo of ex-Googlers wants to help engineering managers manage their teams better. Managers use Tika as an AI-powered assistant over Slack to facilitate personalized conversations with engineering teams. The goal is to quickly uncover and resolve employee engagement issues, and prevent talent churn.

GridRaster – GridRaster wants to bring AR/VR to mobile devices. The problem? AR/VR is compute-intensive. Latency, bandwidth and poor load balancing kill AR/VR on mobile networks. The solution? For this trio of systems engineers from Broadcom, Qualcomm and Texas Instruments, it’s about starting with enterprise use cases and building edge clouds to offload the work. They have 12 patents.

AitoeLabs – Despite the buzz around AI video analytics for security, AitoeLabs claims solutions today are plagued with hundreds of thousands of false alarms, requiring lots of human involvement. The engineering trio founding team combines a secret sauce of contextual data with their own deep models to solve this problem. They claim a 6x reduction in human monitoring needs with their tech. They’re at $240,000 ARR with $1 million of LOIs.

Ubiquios – Companies building wireless IoT devices waste more than $1.8 billion because of inadequate embedded software options, which make products late to market and expose them to security and interoperability issues. The Ubiquios wireless stack wants to simplify the development of wireless IoT devices. The company claims their stack results in up to 90 percent lower cost and up to 50 percent faster time to market. Qualcomm is a partner.

4me, Inc. – 4me helps companies organize and track their IT outsourcing projects. They have 16 employees, 92 customers and generate several million in revenue annually. Storm Ventures led a $1.65 million investment into the company.

TorchFi – You know the pop-up screen you see when you log into a Wi-Fi hotspot? TorchFi thinks it’s a digital gold mine in waiting. Their goal is to convert that into a sales channel for hotspot owners. Their first product is a digital menu that transforms the login screen into a food ordering screen for hotels and restaurants. Cisco has selected them as one of 20 apps to be distributed on their Meraki hotspots.

Cogitai – This team of 16 PhDs wants to usher in a more powerful type of AI called continual learning. The founders are the fathers of the field — and include professors in computer science from UT Austin and U Michigan. Unlike what we commonly think of as AI, Cogitai’s AI is built to acquire new skills and knowledge from experience, much like a child does. They have closed $2 million in bookings this year, and have $5 million in funding.

LoadTap – On-demand trucking apps are in vogue. LoadTap explicitly calls out that it is not one. This team, which includes an Apple software architect and founder with a family background in trucking, is an enterprise SaaS-only solution for shippers who prefer to work with their pre-vetted trucking companies in a closed loop. LoadTap automates matching between the shippers and trucking companies using AI and predictive analytics. They’re at $90,000 ARR and growing revenue 50 percent month over month.

Ondaka – Ondaka has built a VR-like 3D platform to render industrial information visually, starting with the oil and gas industry. For these industrial customers, the platform provides a better way to understand real-time IoT data, operational and job site safety issues and how reliable their systems are. The product launched two months ago, they have closed three customers already and are projecting ARR in the six figures. They have raised $350,000 in funding.

Watch every panel from TC Sessions: Robotics

Last week at UC Berkeley’s Zellerbach Hall, TechCrunch held its second TC Sessions: Robotics event. It was a full day of panels and demos, featuring the top minds in robotics, artificial intelligence and venture capital, along with some of the most cutting-edge demonstrations around.

If you weren’t able to attend, though, no worries; we’ve got the full event recorded for posterity, along with breakdowns of what you missed below.

Getting A Grip on Reality: Deep Learning and Robot Grasping

It turns out grasping objects is really hard for a robot. According to Ken Goldberg, professor and chair of the Industrial Engineering and Operations Research Department, it’s about forces and torques. He and TechCrunch Editor-in-Chief Matthew Panzarino also discussed what Goldberg calls “fog robotics.” Goldberg differentiates it from “cloud robotics” in that “you don’t want to do everything in the cloud because of latency issues and bandwidth limitations, quality of service – and there are also very interesting issues about privacy and security with robotics.”

The Future of the Robot Operating System

Fetch Robotics CEO Melonee Wise joined fellow Willow Garage ex-pats Brian Gerkey and Morgan Quigley to discuss Open Robotics’ Robot Operating System (ROS) efforts. The team is working to design and maintain an open and consistent framework for a broad range of different robotic systems.

Eyes, Ears, and Data: Robot Sensors and GPUs

NVIDIA Vice President Deepu Talla discussed how the chipmaker is making a central play in the AI and deep learning technologies that will drive robots, drones and autonomous vehicles of the future.

The Best Robots on Four Legs

Boston Dynamics CEO Marc Raibert announced onstage that the company’s 66-pound SpotMini robot will be available for purchase by the normals in 2019. Yes, one day you, too, will be able to have a dog robot perform services for you at the office or home.

Old MacDonald Needs a Robot

Agriculture is one of the next major fields for robotics, and we brought together some of the top startups in the field. Dan Steere of Abundant Robotics, Brandon Alexander of Iron Ox, Sébastien Boyer of Farmwise and Willy Pell of John Deere-owned Blue River Technology joined us on stage to discuss the ways in which robotics, artificial intelligence and autonomous systems will transform farm work in fields and orchards.

Teaching Robots New Tricks with AI

Pieter Abbeel is the director of the UC Berkeley Robot Learning Lab and a co-founder of the AI software company covariant.ai. In a wide-ranging discussion, Abbeel described the techniques his lab is using to teach robots how to better interact in human settings through repetition, simulation and learning from their own trial and error.

Can’t We All Just Get Along?

Ayanna Howard of Georgia Tech, Leila Takayama of UC Santa Cruz and Patrick Sobalvarro of Veo Robotics took part in an exploration of the ways in which humans and robots can collaborate in work and home settings. Getting there is a mix of safety and education on both the humans’ and robots’ behalf.

Demos from 254 Lockdown, 1678 Citrus Circuit, Pi Competition: Hercules

Robotics teams from Bellarmine College Preparatory, Davis High School and Hercules High School took to the stage before lunch time to show us what they have been working on. Each team built robots designed to tackle various tasks and the results are impressive.

Venture Investing in Robotics

Renata Quintini of Lux Capital, Rob Coneybeer of Shasta Ventures and Chris Evdemon of Sinovation Ventures discussed the excitement around startups venturing into the robotics industry, but were also quite candid about the difficulty faced by robotics founders who are unfamiliar with the particular industry they hope their innovation could reshape.

Betting Big on Robotics

Andy Rubin has had a lifelong fascination with robotics. In fact, it was his nickname during his time at Apple that gave the Android operating system its name. After a stint heading a robotics initiative at Google, Rubin is using his role as a cofounder of Playground Global to fund some of the most fascinating robotics startups around. In a one-on-one discussion, Rubin talked about why robotics are a good long- and short-term investment, and why one particular long-legged robot could be the future of package delivery.

From the Lab Bench to Term Sheet

This cute little robot from Mayfield Robotics can blink, play music, turn its head and recharge itself. It can also just stay put to take pictures of you and live-stream your daily life. Yep. It watches you. Its name is Kuri and it can be your little buddy to always remind you that you never have to be alone.

Agility Robotics demonstration of Cassie

Agility Robotics’ bipedal humanoid robot was designed with bird legs in mind. But it wasn’t yet designed with arms. The company’s CTO Jonathan Hurst says those are to come. It’ll cost you $35,000 when it’s in full production mode. Custom deliveries started in August 2017 to a select few universities — University of Michigan, Harvard and Caltech, and Berkeley just bought its own. Although we didn’t see an example of this application, Cassie can apparently hold the body weight of a reasonably sized human.

Autonomous Systems

Safety has long been the focus of the push toward self-driving systems. Recent news stories, however, have cast a pall on the technology, leading many to suggest that companies have pushed to introduce it too quickly on public streets. Oliver Cameron of Voyage and Alex Rodrigues of Embark Trucks joined us to discuss these concerns and setbacks, as well as how the self-driving industry moves forward from here.

Teaching Intelligent Machines

NVIDIA is working to help developers create robots and artificially intelligent systems. Vice President of Engineering Claire Delaunay discussed how the company is creating the tools to help democratize the creation of future robotics.

The Future of Transportation

Chris Urmson has been in the self-driving car game for a long time. He joined Google’s self-driving car team in 2009, becoming head of the project four years later. These days, he’s the CEO of Aurora, a startup that has logged a lot of hours testing its own self-driving tech on the roads. Urmson discussed the safety concerns around the technology and how far out we are from self-driving ubiquity.

Demos of RoMeLa’s NABi and ALPHRED

Humans are bipedal, so why is it so hard to replicate that in a robot, asks Dennis Hong, professor and founding director of RoMeLa (Robotics & Mechanisms Laboratory) in the Mechanical & Aerospace Engineering Department at UCLA. One reason, he said, is that the distance between the left and right legs creates a twisting movement that renders forward and backward movement difficult. The solution is to have the robot walk sideways — no twisting. So the team developed NABi (non-anthropomorphic biped), a bipedal locomotion robot with no “feet” or “shins.” To extend NABi’s admittedly limited functionality, the team then created ALPHRED (Autonomous Legged Personal Helper Robot with Enhanced Dynamics). ALPHRED’s limbs, as the team calls them (“not legs, not arms”), can reconfigure into multiple formations, enabling multimodal locomotion.

Building Stronger Humans

The BackX, LegX and ShoulderX from SuitX serve to minimize the stress we humans tend to place on our joints. We saw these modules in action onstage. But infinitely more impressive during the conversation with company co-founder Homayoon Kazerooni was the demonstration the audience saw of the company’s exoskeleton. Arash Bayatmakou fell from a balcony in 2012, which resulted in paralysis; he was told he would never walk again. Five years later, Arash connected with SuitX, and he has been working with a physical therapist to use the device to perform four functions: stand, sit, and walk forward and backward. You can follow his recovery here.

A dozen Googlers quit over Google’s military drone contract

Google's "Project Maven" is supplying machine-learning tools to the Pentagon to support drone strikes; the project has been hugely divisive within Google, with employees pointing out that the company is wildly profitable and doesn't need to compromise on its ethics to keep its doors open; that the drone program is a system of extrajudicial killing far from the battlefield; and that the firm's long-term health depends on its ability to win and retain the trust of users around the world, which will be harder if Google becomes a de facto wing of the US military.

Music payments startup Exactuals debuts r.ai, a “Palantir for music royalties”

Exactuals, a software service offering payments management for the music industry, is debuting r.ai, a new tool that it’s dubbed the “Palantir for music”. It’s a service that can track songwriting information and rights across different platforms to ensure attribution for music distributors.

As companies like Apple and Spotify demand better information from labels about the songs they’re pushing to streaming services, companies are scrambling to clean up their data and provide proper attribution.

According to Exactuals, that’s where the r.ai service comes in.

The company is tracking 59 million songs for their “Interested Party Identifiers” (IPIs), International Standard Work Codes (ISWCs), and International Standard Recording Codes (ISRCs) — all of which are vital to ensuring that songwriters and musicians are properly paid for their work every time a song is streamed, downloaded, covered, or viewed on a distribution platform.

Chris McMurtry, the head of music product at Exactuals, explained it like this: in the music business, a songwriter has the equivalent of a Social Security number that is attached to any song they write so they can receive credit and payment — that’s the IPI. The song itself, the underlying composition, gets its own code, the ISWC. And each recording of that song gets an ISRC, which is used to track the individual versions of a song — covers, samples and remixes — as they are performed and released by other artists.

“There’s only one ISWC, but there might be 300 ISRCs,” says Exactuals chief executive, Mike Hurst.

Publishing technology companies will pay writers and performers based on these identifiers, but they’re struggling to identify and track all of the 700,000 disparate places where the data could be, says McMurtry. Hence the need for r.ai.

The technology is “an open API based on machine learning that matches disparate data sources to clean and enhance it so rights holders can get paid and attribution happens,” says McMurtry.
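
Exactuals hasn’t published how r.ai works under the hood, but the general shape of the problem — reconciling messy recording metadata against a canonical work — can be sketched in a few lines of Python. The identifiers, titles and threshold below are made up, and the fuzzy string match is only a stand-in for whatever models the company actually uses.

```python
from difflib import SequenceMatcher

works = [  # canonical works registry (ISWC -> metadata); entries are invented
    {"iswc": "T-123.456.789-0", "title": "Example Song", "writer_ipi": "00052210040"},
]

recordings = [  # recordings reported by distributors, with messy metadata (invented)
    {"isrc": "USRC17607839", "title": "Example Song (Radio Edit)", "writer": "J. Doe"},
    {"isrc": "GBUM71800123", "title": "Exmple Song - Live", "writer": "Jane Doe"},
]

def similarity(a: str, b: str) -> float:
    """Crude fuzzy match between two titles."""
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

def match_recording(rec: dict, threshold: float = 0.6):
    """Return the best-matching work for a recording, or None below the threshold."""
    best = max(works, key=lambda w: similarity(rec["title"], w["title"]))
    score = similarity(rec["title"], best["title"])
    return (best["iswc"], score) if score >= threshold else (None, score)

for rec in recordings:
    iswc, score = match_recording(rec)
    print(f"{rec['isrc']} -> {iswc} (confidence {score:.2f})")
```

Both messy recordings resolve to the same work in this toy example, which is the point: once recordings are linked to the right ISWC, payments and attribution can follow.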

For publishers, Exactuals argues that r.ai is the best way to track rights across a huge catalog of music and for labels it’s an easy way to provide services like Apple and Spotify with the information they’re now demanding, Hurst said.

What do AI and blockchain mean for the rule of law?

Digital services have frequently been in collision — if not out-and-out conflict — with the rule of law. But what happens when technologies such as deep learning software and self-executing code are in the driving seat of legal decisions?

How can we be sure next-gen ‘legal tech’ systems are not unfairly biased against certain groups or individuals? And what skills will lawyers need to develop to be able to properly assess the quality of the justice flowing from data-driven decisions?

While entrepreneurs have been eyeing traditional legal processes for some years now, with a cost-cutting gleam in their eye and the word ‘streamline’ on their lips, this early phase of legal innovation pales in significance beside the transformative potential of AI technologies that are already pushing their algorithmic fingers into legal processes — and perhaps shifting the line of the law itself in the process.

But how can legal protections be safeguarded if decisions are automated by algorithmic models trained on discrete data-sets — or flowing from policies administered via self-executing code embedded on a blockchain?

These are the sorts of questions that lawyer and philosopher Mireille Hildebrandt, a professor at the research group for Law, Science, Technology and Society at Vrije Universiteit Brussel in Belgium, will be engaging with during a five-year project to investigate the implications of what she terms ‘computational law’.

Last month the European Research Council awarded Hildebrandt a grant of €2.5 million to conduct foundational research with a dual technology focus: artificial legal intelligence and legal applications of blockchain.

Discussing her research plan with TechCrunch, she describes the project as both very abstract and very practical, with a staff that will include both lawyers and computer scientists. She says her intention is to come up with a new legal hermeneutics — so, basically, a framework for lawyers to approach computational law architectures intelligently; to understand limitations and implications, and be able to ask the right questions to assess technologies that are increasingly being put to work assessing us.

“The idea is that the lawyers get together with the computer scientists to understand what they’re up against,” she explains. “I want to have that conversation… I want lawyers who are preferably analytically very sharp and philosophically interested to get together with the computer scientists and to really understand each other’s language.

“We’re not going to develop a common language. That’s not going to work, I’m convinced. But they must be able to understand what the meaning of a term is in the other discipline, and to learn to play around, and to say okay, to see the complexity in both fields, to shy away from trying to make it all very simple.

“And after seeing the complexity to then be able to explain it in a way that the people that really matter — that is us citizens — can make decisions both at a political level and in everyday life.”

Hildebrandt says she included both AI and blockchain technologies in the project’s remit as the two offer “two very different types of computational law”.

There is also of course the chance that the two will be applied in combination — creating “an entirely new set of risks and opportunities” in a legal tech setting.

Blockchain “freezes the future”, argues Hildebrandt, admitting of the two it’s the technology she’s more skeptical of in this context. “Once you’ve put it on a blockchain it’s very difficult to change your mind, and if these rules become self-reinforcing it would be a very costly affair both in terms of money but also in terms of effort, time, confusion and uncertainty if you would like to change that.

“You can do a fork but not, I think, when governments are involved. They can’t just fork.”
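
Her “freezes the future” point can be made concrete with a toy hash chain: because each block commits to the hash of the one before it, quietly amending an earlier rule breaks verification of everything that follows. This is illustrative code only, not a real ledger, and the “rules” are invented examples.

```python
import hashlib
import json

def block_hash(payload: dict) -> str:
    """Deterministic hash of a block's contents (everything except its own hash)."""
    return hashlib.sha256(json.dumps(payload, sort_keys=True).encode()).hexdigest()

def append(chain: list, rule: str) -> None:
    """Add a new rule; its hash covers the previous block's hash."""
    prev = chain[-1]["hash"] if chain else "0" * 64
    block = {"index": len(chain), "rule": rule, "prev_hash": prev}
    block["hash"] = block_hash({k: v for k, v in block.items() if k != "hash"})
    chain.append(block)

def verify(chain: list) -> bool:
    """Recompute every hash and link; any edit to an earlier block fails verification."""
    for i, block in enumerate(chain):
        expected = block_hash({k: v for k, v in block.items() if k != "hash"})
        prev_ok = block["prev_hash"] == (chain[i - 1]["hash"] if i else "0" * 64)
        if block["hash"] != expected or not prev_ok:
            return False
    return True

chain: list = []
append(chain, "tax rate on category A is 20%")
append(chain, "obligations settle within 30 days")
print(verify(chain))   # True

chain[0]["rule"] = "tax rate on category A is 15%"   # try to change your mind later
print(verify(chain))   # False: every later block now fails verification
```

Changing the earlier rule without rebuilding (or forking) the rest of the chain is exactly the costly, disruptive exercise Hildebrandt is describing.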

That said, she posits that blockchain could at some point in the future be deemed an attractive alternative mechanism for states and companies to settle on a less complex system to determine obligations under global tax law, for example. (Assuming any such accord could indeed be reached.)

Given how complex legal compliance can already be for Internet platforms operating across borders and intersecting with different jurisdictions and political expectations, there may come a point when a new system for applying rules is deemed necessary — and putting policies on a blockchain could be one way to respond to all the chaotic overlap.

Though Hildebrandt is cautious about the idea of blockchain-based systems for legal compliance.

It’s the other area of focus for the project — AI legal intelligence — where she clearly sees major potential, though also of course risks too. “AI legal intelligence means you use machine learning to do argumentation mining — so you do natural language processing on a lot of legal texts and you try to detect lines of argumentation,” she explains, citing the example of needing to judge whether a specific person is a contractor or an employee.

“That has huge consequences in the US and in Canada, both for the employer… and for the employee and if they get it wrong the tax office may just walk in and give them an enormous fine plus claw back a lot of money which they may not have.”

As a consequence of confused case law in the area, academics at the University of Toronto developed an AI to try to help — by mining lots of related legal texts to generate a set of features within a specific situation that could be used to check whether a person is an employee or not.

“They’re basically looking for a mathematical function that connected input data — so lots of legal texts — with output data, in this case whether you are either an employee or a contractor. And if that mathematical function gets it right in your data set all the time or nearly all the time you call it high accuracy and then we test on new data or data that has been kept apart and you see whether it continues to be very accurate.”
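
That workflow — fit a function from legal text to a label, then check accuracy on data that was kept apart — looks roughly like the following sketch. The tiny corpus, labels and model choice are invented for illustration; a real system would mine thousands of rulings rather than eight hand-written sentences.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split

# Toy "legal texts" describing working arrangements (invented examples).
texts = [
    "worker supplied own tools and invoiced per project",
    "company set fixed hours and provided all equipment",
    "engaged for a single deliverable with no supervision",
    "paid a monthly salary with pension contributions",
    "free to work for other clients simultaneously",
    "subject to the employer's disciplinary procedures",
    "bore the financial risk of the work performed",
    "received paid holiday and sick leave",
]
labels = ["contractor", "employee"] * 4  # aligned with the texts above

# Keep some data apart, exactly as described in the quote.
X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.25, random_state=0, stratify=labels)

vec = TfidfVectorizer()
model = LogisticRegression(max_iter=1000)
model.fit(vec.fit_transform(X_train), y_train)
pred = model.predict(vec.transform(X_test))

# Accuracy alone can mislead; per-class metrics (and, beyond this sketch, how the
# data was collected) matter before trusting the model.
print("accuracy:", accuracy_score(y_test, pred))
print(classification_report(y_test, pred, zero_division=0))
```
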

Given AI’s reliance on data-sets to derive algorithmic models that are used to make automated judgement calls, lawyers are going to need to understand how to approach and interrogate these technology structures to determine whether an AI is legally sound or not.

Put another way, high accuracy that isn’t the product of a biased data-set can’t just be a ‘nice to have’ if your AI is involved in making legal judgment calls on people.

“The technologies that are going to be used, or the legal tech that is now being invested in, will require lawyers to interpret the end results — so instead of saying ‘oh wow this has 98% accuracy and it outperforms the best lawyers!’ they should say ‘ah, ok, can you please show me the set of performance metrics that you tested on. Ah thank you, so why did you put these four into the drawer because they have low accuracy?… Can you show me your data-set? What happened in the hypothesis space? Why did you filter those arguments out?’

“This is a conversation that really requires lawyers to become interested, and to have a bit of fun. It’s a very serious business because legal decisions have a lot of impact on people’s lives but the idea is that lawyers should start having fun in interpreting the outcomes of artificial intelligence in law. And they should be able to have a serious conversation about the limitations of self-executing code — so the other part of the project [i.e. legal applications of blockchain tech].

“If somebody says ‘immutability’ they should be able to say that means that if after you have put everything in the blockchain you suddenly discover a mistake that mistake is automated and it will cost you an incredible amount of money and effort to get it repaired… Or ‘trustless’ — so you’re saying we should not trust the institutions but we should trust software that we don’t understand, we should trust all sorts of middlemen, i.e. the miners in permissionless, or the other types of middlemen who are in other types of distributed ledgers… ”

“I want lawyers to have ammunition there, to have solid arguments… to actually understand what bias means in machine learning,” she continues, pointing by way of an example to research that’s being done by the AI Now Institute in New York to investigate disparate impacts and treatments related to AI systems.

“That’s one specific problem but I think there are many more problems,” she adds of algorithmic discrimination. “So the purpose of this project is to really get together, to get to understand this.

“I think it’s extremely important for lawyers, not to become computer scientists or statisticians but to really get their finger behind what’s happening and then to be able to share that, to really contribute to legal method — which is text oriented. I’m all for text but we have to, sort of, make up our minds when we can afford to use non-text regulation. I would actually say that that’s not law.

“So how should be the balance between something that we can really understand, that is text, and these other methods that lawyers are not trained to understand… And also citizens do not understand.”

Hildebrandt does see opportunities for AI legal intelligence argument mining to be “used for the good” — saying, for example, AI could be applied to assess the calibre of the decisions made by a particular court.

Though she also cautions that huge thought would need to go into the design of any such systems.

“The stupid thing would be to just give the algorithm a lot of data and then train it and then say ‘hey yes that’s not fair, wow that’s not allowed’. But you could also really think deeply what sort of vectors you have to look at, how you have to label them. And then you may find out that — for instance — the court sentences much more strictly because the police is not bringing the simple cases to court but it’s a very good police and they talk with people, so if people have not done something really terrible they try to solve that problem in another way, not by using the law. And then this particular court gets only very heavy cases and therefore gives far more heavy sentences than other courts that get from their police or public prosecutor all light cases.

“To see that you should not only look at legal texts of course. You have to look also at data from the police. And if you don’t do that then you can have very high accuracy and a total nonsensical outcome that doesn’t tell you anything you didn’t already know. And if you do it another way you can sort of confront people with their own prejudices and make it interesting — challenge certain things. But in a way that doesn’t take too much for granted. And my idea would be that the only way this is going to work is to get a lot of different people together at the design stage of the system — so when you are deciding which data you’re going to train on, when you are developing what machine learners call your ‘hypothesis space’, so the type of modeling you’re going to try and do. And then of course you should test five, six, seven performance metrics.

“And this is also something that people should talk about — not just the data scientists but, for instance, lawyers but also the citizens who are going to be affected by what we do in law. And I’m absolutely convinced that if you do that in a smart way that you get much more robust applications. But then the incentive structure to do it that way is maybe not obvious. Because I think legal tech is going to be used to reduce costs.”
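
The confounder she describes is easy to simulate: give two hypothetical courts the identical sentencing rule, but let one court’s police filter out the light cases, and the filtered court looks much harsher on paper. The numbers below are invented purely to illustrate the point.

```python
import random

random.seed(0)

def sentence(severity: float) -> float:
    """Identical sentencing rule for every court: months scale with case severity."""
    return 6 + 40 * severity

cases = [random.random() for _ in range(10_000)]       # case severity in [0, 1]

court_a = [sentence(s) for s in cases]                  # police forwards everything
court_b = [sentence(s) for s in cases if s > 0.7]       # police filters light cases out

avg = lambda xs: sum(xs) / len(xs)
print(f"court A average sentence: {avg(court_a):5.1f} months")
print(f"court B average sentence: {avg(court_b):5.1f} months  (same rule, filtered intake)")
```

Court B appears far stricter even though the rule is identical, which is why, as Hildebrandt says, the analysis also needs the police-side data, not just the legal texts.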

She says one of the key concepts of the research project is legal protection by design — opening up other interesting (and not a little alarming) questions such as what happens to the presumption of innocence in a world of AI-fueled ‘pre-crime’ detectors?

“How can you design these systems in such a way that they offer legal protection from the first minute they come to the market — and not as an add-on or a plug in. And that’s not just about data protection but also about non-discrimination of course and certain consumer rights,” she says.

“I always think that the presumption of innocence has to be connected with legal protection by design. So this is more on the side of the police and the intelligence services — how can you help the intelligence services and the police to buy or develop ICT that has certain constraints which makes it compliant with the presumption of innocence which is not easy at all because we probably have to reconfigure what is the presumption of innocence.”

And while the research is part abstract and solidly foundational, Hildebrandt points out that the technologies being examined — AI and blockchain — are already being applied in legal contexts, albeit in “a state of experimentation”.

And, well, this is one tech-fueled future that really must not be unevenly distributed. The risks are stark.   

“Both the EU and national governments have taken a liking to experimentation… and where experimentation stops and systems are really already implemented and impacting decisions about your and my life is not always so easy to see,” she adds.

Her other hope is that the interpretation methodology developed through the project will help lawyers and law firms to navigate the legal tech that’s coming at them as a sales pitch.

“There’s going to be, obviously, a lot of crap on the market,” she says. “That’s inevitable, this is going to be a competitive market for legal tech and there’s going to be good stuff, bad stuff, and it will not be easy to decide what’s good stuff and bad stuff — so I do believe that by taking this foundational perspective it will be more easy to know where you have to look if you want to make that judgement… It’s about a mindset and about an informed mindset on how these things matter.

“I’m all in favor of agile and lean computing. Don’t do things that make no sense… So I hope this will contribute to a competitive advantage for those who can skip methodologies that are basically nonsensical.”

Chinese law professor: AI will end capitalism

Feng Xiang is a prominent Chinese legal scholar with an appointment at Tsinghua University; in a new Washington Post editorial adapted from his recent speech at the Berggruen Institute’s China Center workshop on artificial intelligence in Beijing, he argues that capitalism is incompatible with AI.

See in the Dark: a machine learning technique for producing astoundingly sharp photos in very low light

https://www.youtube.com/watch?v=qWKUFK7MWvg

A group of scientists from Intel and the University of Illinois at Urbana–Champaign have published a paper called Learning to See in the Dark detailing a powerful machine-learning based image processing technique that allows regular cameras to take super-sharp pictures in very low light, without long exposures or the kinds of graininess associated with low-light photography.
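
At a very high level, the paper’s recipe is to amplify the dark raw capture by the desired exposure ratio and let a trained fully convolutional network map the noisy result to a clean image. The sketch below is a loose, hypothetical illustration of that idea; the placeholder “model” is not the authors’ network, and the function names are invented.

```python
import numpy as np

def enhance_low_light(raw_frame: np.ndarray, amplification: float, model) -> np.ndarray:
    """Scale the underexposed raw capture, then hand it to a learned mapping."""
    amplified = np.clip(raw_frame * amplification, 0.0, 1.0)
    return model(amplified)   # learned mapping: noisy amplified raw -> clean image

# Toy stand-in for a trained network (identity), just to make the sketch runnable.
identity_model = lambda x: x
dark = np.random.rand(4, 4).astype(np.float32) * 0.02   # a very underexposed frame
print(enhance_low_light(dark, amplification=50.0, model=identity_model))
```
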

8 big announcements from Google I/O 2018

Google kicked off its annual I/O developer conference at Shoreline Amphitheater in Mountain View, California. Here are some of the biggest announcements from the Day 1 keynote. There will be more to come over the next couple of days, so follow along on everything Google I/O on TechCrunch. 

Google goes all in on artificial intelligence, rebranding its research division to Google AI

Just before the keynote, Google announced it is rebranding its Google Research division to Google AI. The move signals how Google has increasingly focused R&D on computer vision, natural language processing, and neural networks.

Google makes talking to the Assistant more natural with “continued conversation”

What Google announced: Google announced a “continued conversation” update to Google Assistant that makes talking to the Assistant feel more natural. Now, instead of having to say “Hey Google” or “OK Google” every time you want to say a command, you’ll only have to do so the first time. The company also is adding a new feature that allows you to ask multiple questions within the same request. All this will roll out in the coming weeks.

Why it’s important: When you’re having a typical conversation, odds are you are asking follow-up questions if you didn’t get the answer you wanted. But it can be jarring to have to say “Hey Google” every single time, and it breaks the whole flow and makes the process feel pretty unnatural. If Google wants to be a significant player when it comes to voice interfaces, the actual interaction has to feel like a conversation — not just a series of queries.

Google Photos gets an AI boost

What Google announced: Google Photos already makes it easy for you to correct photos with built-in editing tools and AI-powered features for automatically creating collages, movies and stylized photos. Now Photos is getting more AI-powered fixes: a new version of the app will suggest quick tweaks like B&W photo colorization, brightness corrections, rotations and adding pops of color.

Why it’s important: Google is working to become a hub for all of your photos, and it’s able to woo potential users by offering powerful tools to edit, sort, and modify those photos. Each additional photo gives Google more data and helps it get better at image recognition, which not only improves the user experience but also sharpens the tools behind Google’s other services. Google, at its heart, is a search company — and it needs a lot of data to get visual search right.

Google Assistant and YouTube are coming to Smart Displays

What Google announced: Smart Displays were the talk of Google’s CES push this year, but we haven’t heard much about Google’s Echo Show competitor since. At I/O, we got a little more insight into the company’s smart display efforts. Google’s first Smart Displays will launch in July, and of course will be powered by Google Assistant and YouTube. It’s clear that the company’s invested some resources into building a visual-first version of Assistant, justifying the addition of a screen to the experience.

Why it’s important: Users are increasingly getting accustomed to the idea of some smart device sitting in their living room that will answer their questions. But Google is looking to create a system where a user can ask a question and then get some kind of visual display for actions that just can’t be resolved with a voice interface alone. Google Assistant handles the voice part of that equation — and YouTube is a natural service to pair with the screen.

Google Assistant is coming to Google Maps

What Google announced: Google Assistant is coming to Google Maps, available on iOS and Android this summer. The addition is meant to provide better recommendations to users. Google has long worked to make Maps seem more personalized, but since Maps is now about far more than just directions, the company is introducing new features to give you better recommendations for local places.

The maps integration also combines the camera, computer vision technology, and Google Maps with Street View. With the camera/Maps combination, it really looks like you’ve jumped inside Street View. Google Lens can do things like identify buildings, or even dog breeds, just by pointing your camera at the object in question. It will also be able to identify text.

Why it’s important: Maps is one of Google’s biggest and most important products. There’s a lot of excitement around augmented reality — you can point to phenomena like Pokémon Go — and companies are just starting to scratch the surface of the best use cases for it. Figuring out directions seems like such a natural use case for a camera, and while it was a bit of a technical feat, it gives Google yet another perk for its Maps users to keep them inside the service and not switch over to alternatives. Again, with Google, everything comes back to the data, and it’s able to capture more data if users stick around in its apps.

Google announces a new generation for its TPU machine learning hardware

What Google announced: As the war for creating customized AI hardware heats up, Google said that it is rolling out its third generation of silicon, the Tensor Processing Unit 3.0. Google CEO Sundar Pichai said the new TPU is 8x more powerful per pod than last year’s, with up to 100 petaflops in performance. Google joins pretty much every other major company in looking to create custom silicon to handle its machine learning operations.

Why it’s important: There’s a race to create the best machine learning tools for developers. Whether that’s at the framework level with tools like TensorFlow or PyTorch or at the actual hardware level, the company that’s able to lock developers into its ecosystem will have an advantage over its competitors. It’s especially important as Google looks to build its cloud platform, GCP, into a massive business while going up against Amazon’s AWS and Microsoft Azure. Giving developers — who are already adopting TensorFlow en masse — a way to speed up their operations can help Google continue to woo them into its ecosystem.


Google News gets an AI-powered redesign

What Google announced: Watch out, Facebook. Google is also planning to leverage AI in a revamped version of Google News. The AI-powered, redesigned news destination app will “allow users to keep up with the news they care about, understand the full story, and enjoy and support the publishers they trust.” It will leverage elements found in Google’s digital magazine app Newsstand, as well as YouTube, and introduces new features like “newscasts” and “full coverage” to help people get a summary or a more holistic view of a news story.

Why it’s important: Facebook’s main product is literally called “News Feed,” and it serves as a major source of information for a non-trivial portion of the planet. But Facebook is embroiled in a scandal over the personal data of as many as 87 million users ending up in the hands of a political research firm, and there are a lot of questions over Facebook’s algorithms and whether they surface legitimate information. That’s a huge hole that Google could exploit by offering a better news product and, once again, locking users into its ecosystem.

Google unveils ML Kit, an SDK that makes it easy to add AI smarts to iOS and Android apps

What Google announced: Google unveiled ML Kit, a new software development kit for app developers on iOS and Android that allows them to integrate pre-built, Google-provided machine learning models into apps. The models support text recognition, face detection, barcode scanning, image labeling and landmark recognition.

Why it’s important: Machine learning tools have enabled a new wave of use cases built on top of image recognition and speech detection. But even though frameworks like TensorFlow have made it easier to build applications that tap those tools, it can still take a high level of expertise to get them off the ground and running. Developers often figure out the best use cases for new tools and devices, and development kits like ML Kit help lower the barrier to entry, giving developers without a ton of machine learning expertise a playground to start figuring out interesting use cases for those applications.

So when will you be able to actually play with all these new features? The Android P beta is available today, and you can find the upgrade here.