FreshRSS

Yesterday — January 20th, 2020

Google’s Sundar Pichai doesn’t want you to be clear-eyed about AI’s dangers

By Natasha Lomas

Alphabet and Google CEO, Sundar Pichai, is the latest tech giant kingpin to make a public call for AI to be regulated while simultaneously encouraging lawmakers towards a dilute enabling framework that does not put any hard limits on what can be done with AI technologies.

In an op-ed published in today’s Financial Times, Pichai makes a headline-grabbing call for artificial intelligence to be regulated. But his pitch injects a suggestive undercurrent that puffs up the risk for humanity of not letting technologists get on with business as usual and apply AI at population-scale — with the Google chief claiming: “AI has the potential to improve billions of lives, and the biggest risk may be failing to do so” — thereby seeking to frame ‘no hard limits’ as actually the safest option for humanity.

Simultaneously the pitch downplays any negatives that might cloud the greater good that Pichai implies AI will unlock — presenting “potential negative consequences” as simply the inevitable and necessary price of technological progress.

It’s all about managing the level of risk, is the leading suggestion, rather than questioning outright whether the use of a hugely risk-laden technology such as facial recognition should actually be viable in a democratic society.

“Internal combustion engines allowed people to travel beyond their own areas but also caused more accidents,” Pichai writes, raiding history for a self-serving example while ignoring the vast climate costs of combustion engines (and the resulting threat now posed to the survival of countless species on Earth).

“The internet made it possible to connect with anyone and get information from anywhere, but also easier for misinformation to spread,” he goes on. “These lessons teach us that we need to be clear-eyed about what could go wrong.”

For “clear-eyed” read: Accepting of the technology-industry’s interpretation of ‘collateral damage’. (Which, in the case of misinformation and Facebook, appears to run to feeding democracy itself into the ad-targeting meat-grinder.)

Meanwhile, not at all mentioned in Pichai’s discussion of AI risks: The concentration of monopoly power that artificial intelligence appears to be very good at supercharging.

Funny that.

Of course it’s hardly surprising a tech giant that, in recent years, rebranded an entire research division to ‘Google AI’ — and has previously been called out by some of its own workforce over a project involving applying AI to military weapons technology — should be lobbying lawmakers to set AI ‘limits’ that are as dilute and abstract as possible.

The only thing better than zero regulation, from the industry’s point of view, is laws made by useful idiots who’ve fallen hook, line and sinker for industry-expounded false dichotomies — such as those claiming it’s ‘innovation or privacy’.

Pichai’s intervention also comes at a strategic moment, with US lawmakers eyeing AI regulation and the White House seemingly throwing itself into alignment with tech giants’ desires for ‘innovation-friendly’ rules which make their business easier. (To wit: This month White House CTO Michael Kratsios warned in a Bloomberg op-ed against “preemptive, burdensome or duplicative rules that would needlessly hamper AI innovation and growth”.)

The new European Commission, meanwhile, has been sounding a firmer line on both AI and big tech.

It has made tech-driven change a key policy priority, with president Ursula von der Leyen making public noises about reining in tech giants. She has also committed to publish “a coordinated European approach on the human and ethical implications of Artificial Intelligence” within her first 100 days in office. (She took up the post on December 1, 2019 so the clock is ticking.)

Last week a leaked draft of the Commission’s proposals for pan-EU AI regulation suggested it’s leaning towards a relatively light-touch approach (albeit, the European version of light touch is considerably more involved and interventionist than anything born in a Trump White House, clearly) — although the paper does float the idea of a temporary ban on the use of facial recognition technology in public places.

The paper notes that such a ban would “safeguard the rights of individuals, in particular against any possible abuse of the technology” — before arguing against such a “far-reaching measure that might hamper the development and uptake of this technology”, in favor of relying on provisions in existing EU law (such as the EU data protection framework, GDPR), in addition to relevant tweaks to current product safety and liability laws.

While it’s not yet clear which way the Commission will jump on regulating AI, even the lightish-touch version it’s considering would likely be a lot more onerous than Pichai would like.

In the op-ed he calls for what he couches as “sensible regulation” — aka taking a “proportionate approach, balancing potential harms, especially in high-risk areas, with social opportunities”.

For “social opportunities” read: The plentiful ‘business opportunities’ Google is spying — assuming the hoped-for vast additional revenue scale it can get by supercharging expansion of AI-powered services into all sorts of industries and sectors (from health to transportation to everywhere else in between) isn’t derailed by hard legal limits on where AI can actually be applied.

“Regulation can provide broad guidance while allowing for tailored implementation in different sectors,” Pichai urges, setting out a preference for enabling “principles” and post-application “reviews”, to keep the AI spice flowing.

The op-ed only touches very briefly on facial recognition — despite the FT editors choosing to illustrate it with an image of the tech. Here Pichai again seeks to reframe the debate around what is, by nature, an extremely rights-hostile technology — talking only in passing of “nefarious uses” of facial recognition.

Of course this wilfully obfuscates the inherent risks of letting blackbox machines make algorithmic guesses at identity every time a face happens to pass through a public space.

You can’t hope to protect people’s privacy in such a scenario. Many other rights are also at risk, depending on what else the technology is being used for. So, really, any use of facial recognition is laden with individual and societal risk.

But Pichai is seeking to put blinkers on lawmakers. He doesn’t want them to see inherent risks baked into such a potent and powerful technology — pushing them towards only a narrow, ill-intended subset of “nefarious” and “negative” AI uses and “consequences” as being worthy of “real concerns”. 

And so he returns to banging the drum for “a principled and regulated approach to applying AI” [emphasis ours] — putting the emphasis on regulation that, above all, gives the green light for AI to be applied.

What technologists fear most here is rules that tell them when artificial intelligence absolutely cannot be applied.

Ethics and principles are, to a degree, mutable concepts — and ones which the tech giants have become very practiced at claiming as their own, for PR purposes, including by attaching self-styled ‘guard-rails’ to their own AI operations. (But of course there are no actual legal binds there.)

At the same time data-mining giants like Google are very smooth operators when it comes to gaming existing EU rules around data protection, such as by infesting their user-interfaces with confusing dark patterns that push people to click or swipe their rights away.

But a ban on applying certain types of AI would change the rules of the game. Because it would put society in the driving seat.

Some far-sighted regulators have called for laws that contain at least a moratorium on certain “dangerous” applications of AI — such as facial recognition technology, or autonomous weapons like the drone-based system Google was previously working on.

And a ban would be far harder for platform giants to simply bend to their will.

So for a while I was willing to buy into the whole tech ethics thing but now I’m fully on the side of tech refusal. We need to be teaching refusal.

— Jonathan Senchyne (@jsench) January 16, 2020

Before yesterday

Shadows’ Dylan Flinn and Kombo’s Kevin Gould on the business of ‘virtual influencers’

By Eric Peckham

In films, TV shows and books — and even in video games where characters are designed to respond to user behavior — we don’t perceive characters as beings with whom we can establish two-way relationships. But that’s poised to change, at least in some use cases.

Interactive characters — fictional, virtual personas capable of personalized interactions — are defining new territory in entertainment. In my guide to the concept of “virtual beings,” I outlined two categories of these characters:

  • virtual influencers: fictional characters with real-world social media accounts who build and engage with a mass following of fans.
  • virtual companions: AIs oriented toward one-to-one relationships, much like the tech depicted in the films “Her” and “Ex Machina.” They are personalized enough to engage us in entertaining discussions and respond to our behavior (in the physical world or within games) like a human would.

Part 2 of 3: the business of virtual influencers

Today’s discussion focuses on virtual influencers: fictional characters that build and engage followings of real people over social media. To explore the topic, I spoke with two experienced entrepreneurs:

  • Dylan Flinn is CEO of Shadows, an LA-based animation studio that’s building a roster of interactive characters for social media audiences. Dylan started his career in VC, funding companies such as Robinhood, Patreon and Bustle, and also spent two years as an agent at CAA.
  • Kevin Gould is CEO of Kombo Ventures, a talent management and brand incubation firm that has guided the careers of top influencers like Jake Paul and SSSniperWolf. He is the co-founder of three direct-to-consumer brands — Insert Name Here, Wakeheart and Glamnetic — and is an angel investor in companies like Clutter, Beautycon and DraftKings.

Compound’s Mike Dempsey on virtual influencers and AI characters

By Eric Peckham

In films, TV shows and books — and even in video games where characters are designed to respond to user behavior — we don’t perceive characters as beings with whom we can establish two-way relationships. But that’s poised to change, at least in some use cases.

Interactive characters — fictional, virtual personas capable of personalized interactions — are defining new territory in entertainment. In my guide to the concept of “virtual beings,” I outlined two categories of these characters:

  • virtual influencers: fictional characters with real-world social media accounts who build and engage with a mass following of fans.
  • virtual companions: AIs oriented toward one-to-one relationships, much like the tech depicted in the films “Her” and “Ex Machina.” They are personalized enough to engage us in entertaining discussions and respond to our behavior (in the physical world or within games) like a human would.

Part 1 of 3: the investor perspective

In a series of three interviews, I’m exploring the startup opportunities in both of these spaces in greater depth. First, Michael Dempsey, a partner at VC firm Compound who has blogged extensively about digital characters, avatars and animation, offers his perspective as an investor hunting for startup opportunities within these spaces.

Apple's Latest Deal Shows How AI Is Moving Right Onto Devices

By Will Knight
The iPhone maker's purchase of startup Xnor.ai is the latest sign of a trend toward computing on the "edge," rather than in the cloud.

Artist Refik Anadol Turns Data Into Art, With Help From AI

By Tom Simonite
He sees pools of data as raw material for visualizations that he calls a new kind of “sculpture.”

EU lawmakers are eyeing risk-based rules for AI, per leaked white paper

By Natasha Lomas

The European Commission is considering a temporary ban on the use of facial recognition technology, according to a draft proposal for regulating artificial intelligence obtained by Euractiv.

Creating rules to ensure AI is ‘trustworthy and human’ has been an early flagship policy promise of the new Commission, led by president Ursula von der Leyen.

But the leaked proposal suggests the EU’s executive body is in fact leaning towards tweaks of existing rules and sector/app specific risk-assessments and requirements, rather than anything as firm as blanket sectoral requirements or bans.

The leaked Commission white paper floats the idea of a three-to-five-year period in which the use of facial recognition technology could be prohibited in public places — to give EU lawmakers time to devise ways to assess and manage risks around the use of the technology, such as to people’s privacy rights or the risk of discriminatory impacts from biased algorithms.

“This would safeguard the rights of individuals, in particular against any possible abuse of the technology,” the Commission writes, adding that: “It would be necessary to foresee some exceptions, notably for activities in the context of research and development and for security purposes.”

However the text raises immediate concerns about imposing even a time-limited ban — which is described as “a far-reaching measure that might hamper the development and uptake of this technology” — and the Commission goes on to state that its preference “at this stage” is to rely on existing EU data protection rules, aka the General Data Protection Regulation (GDPR).

The white paper contains a number of options the Commission is still considering for regulating the use of artificial intelligence more generally.

These range from voluntary labelling; to imposing sectoral requirements for the public sector (including on the use of facial recognition tech); to mandatory risk-based requirements for “high-risk” applications (such as within risky sectors like healthcare, transport, policing and the judiciary, as well as for applications which can “produce legal effects for the individual or the legal entity or pose risk of injury, death or significant material damage”); to targeted amendments to existing EU product safety and liability legislation.

The proposal also emphasizes the need for an oversight governance regime to ensure rules are followed — though the Commission suggests leaving it open to Member States to choose whether to rely on existing governance bodies for this task or create new ones dedicated to regulating AI.

Per the draft white paper, the Commission says its preference for regulating AI is option 3 combined with options 4 and 5: aka mandatory risk-based requirements on developers (of whatever sub-set of AI apps is deemed “high-risk”) that could result in some “mandatory criteria”, combined with relevant tweaks to existing product safety and liability legislation, and an overarching governance framework.

Hence it appears to be leaning towards a relatively light-touch approach, focused on “building on existing EU legislation” and creating app-specific rules for a sub-set of “high-risk” AI apps/uses — and which likely won’t stretch to even a temporary ban on facial recognition technology.

Much of the white paper is also taken up with discussion of strategies for “supporting the development and uptake of AI” and “facilitating access to data”.

“This risk-based approach would focus on areas where the public is at risk or an important legal interest is at stake,” the Commission writes. “This strictly targeted approach would not add any new additional administrative burden on applications that are deemed ‘low-risk’.”

EU commissioner Thierry Breton, who oversees the internal market portfolio, expressed resistance to creating rules for artificial intelligence last year — telling the EU parliament then that he “won’t be the voice of regulating AI“.

For “low-risk” AI apps, the white paper notes that provisions in the GDPR which give individuals the right to receive information about automated processing and profiling, and set a requirement to carry out a data protection impact assessment, would apply.

Albeit the regulation only defines limited rights and restrictions over automated processing — in instances where there’s a legal or similarly significant effect on the people involved. So it’s not clear how extensively it would in fact apply to “low-risk” apps.

If it’s the Commission’s intention to also rely on GDPR to regulate higher risk stuff — such as, for example, police forces’ use of facial recognition tech — instead of creating a more explicit sectoral framework to restrict their use of highly privacy-hostile AI technologies, it could exacerbate an already confusing legislative picture where law enforcement is concerned, according to Dr Michael Veale, a lecturer in digital rights and regulation at UCL.

“The situation is extremely unclear in the area of law enforcement, and particularly the use of public private partnerships in law enforcement. I would argue the GDPR in practice forbids facial recognition by private companies in a surveillance context without member states actively legislating an exemption into the law using their powers to derogate. However, the merchants of doubt at facial recognition firms wish to sow heavy uncertainty into that area of law to legitimise their businesses,” he told TechCrunch.

“As a result, extra clarity would be extremely welcome,” Veale added. “The issue isn’t restricted to facial recognition however: Any type of biometric monitoring, such as voice or gait recognition, should be covered by any ban, because in practice they have the same effect on individuals.”

An advisory body set up to advise the Commission on AI policy set out a number of recommendations in a report last year — including suggesting a ban on the use of AI for mass surveillance and social credit scoring systems of citizens.

But its recommendations were criticized by privacy and rights experts for falling short by failing to grasp wider societal power imbalances and structural inequality issues which AI risks exacerbating — including by supercharging existing rights-eroding business models.

In a paper last year Veale dubbed the advisory body’s work a “missed opportunity” — writing that the group “largely ignore infrastructure and power, which should be one of, if not the most, central concern around the regulation and governance of data, optimisation and ‘artificial intelligence’ in Europe going forwards”.

TRI-AD’s James Kuffner and Max Bajracharya are coming to TC Sessions: Robotics+AI 2020

By Brian Heater

With the Tokyo Summer Olympics rapidly approaching, 2020 is shaping up to be a big year for TRI-AD (Toyota Research Institute – Advanced Development). Opened in 2018, the research wing is devoted to bringing some of TRI’s work into practice. The organization is heavily invested in autonomous driving, as well as other key robotics projects.

TRI-AD’s CEO James Kuffner and VP of Robotics Max Bajracharya will be joining us on stage at TC Sessions: Robotics+AI on March 3 at U.C. Berkeley to discuss their work in the field. The company has been working to promote accessibility, both through its work in automotive and smart cities and through robotics aimed at assisting Japan’s aging population.

The Summer Olympics will serve as an opportunity for TRI-AD to showcase those technologies in practice. Kuffner and Bajracharya will discuss why companies like Toyota are investing in robotics and working to make everyday robotics a reality.

Early Bird tickets are now on sale for $275. Book your tickets now and save $150 before prices go up!

Student Tickets are just $50 – grab yours here.

Startups, book a demo exhibitor table and get 4 tickets to the show and a demo area to showcase your company. Packages run $2200.

Allen Institute for AI’s Incubator expands with $10M fund from high-profile VCs

By Devin Coldewey

The Allen Institute for AI (AI2) started its incubator up two years ago, helping launch companies like Xnor.ai, Blue Canoe, and WellSaidLabs. Their success has attracted funding from not just local Seattle VC outfit Madrona, but Sequoia, Kleiner Perkins, and Two Sigma as well, resulting in a new $10M fund that should help keep the lights on.

The AI2 Incubator, led by Jacob Colker since its inception in 2017, has focused on launching a handful of companies every year that in some way leverage a serious AI advantage. Blue Canoe, for instance, does natural language processing with a focus on accent modification; Xnor.ai is working on ultra-low-power implementations of machine learning algorithms, and was just acquired yesterday by Apple for a reported $200M.

“We think the next generation of so called AI-first companies are going to have to graduate into building long term, successful businesses that start with an AI edge,” said the program’s new managing director, Bryan Hale. “And the people who can help do this are the ones who have helped build iconic companies.”

Hence the involvement of household names (in the startup community anyhow) Sequoia and Kleiner Perkins, and Two Sigma from New York. Seattle-based Madrona also recently invested in AI2 company Lexion. It’s a pretty solid crowd to be running with, and as Colker pointed out, “they don’t often come together.”

“But also, they looked up into the northwest and said, what’s going on up there?” added Hale. Indeed, Seattle has over the last few years blossomed into a haven for AI research, with many major tech companies establishing or expanding satellite offices here at least partly concerned with the topic: Apple, Google, Nvidia, and Facebook among others, and of course local standbys Amazon, Microsoft, and Adobe.

Practically speaking the new fund will let the incubator continue on its current path, but with a bit more runway and potentially bigger investments in the startups it works with.

“We just have a lot more resources now to help our companies succeed,” said Colker. “Previously we were able to write up to about a $250,000 check, but now we can write up to maybe $800,000 per company. That means they have a lot more time to build out their team, aggregate training data, test their models, all these points that are important for a team to raise a bigger, better VC funding round.”

AI2 prides itself on its large staff of PhDs and open research strategy, publishing pretty much everything publicly in order to spur the field onwards. Access to these big brains, many of whom have bred successful startups of their own, is no less a draw than the possibility of more general business mentorship and funding.

Colker said the incubator will continue to produce 3-5 startups per year, each one taking “about 12-18 months, from whiteboard to venture funding.” AI, he pointed out, often needs more time than a consumer app or even enterprise play, since it’s as much research as it is development. But so far the model seems to work quite well.

“There are very few places in the world where an entrepreneur can come to take advantage of the brain power of a hundred PhDs and support staff. We’ve got a new research center with 70 desks, we’ve got plenty of space for those teams to grow,” he said. “We’re incredibly well positioned to support the next wave of AI companies.”

Bolt raises €50M in venture debt from the EU to expand its ride-hailing business

By Ingrid Lunden

Bolt, the billion-dollar startup out of Estonia that’s building a ride-hailing, scooter and food delivery business across Europe and Africa, has picked up a tranche of funding in its bid to take on Uber and the rest in the world of on-demand transportation.

The company has picked up €50 million (about $56 million) from the European Investment Bank to continue developing its technology and safety features, as well as to expand newer areas of its business, such as food delivery and personal transport like e-scooters.

With this latest money, Bolt has raised more than €250 million in funding since opening for business in 2013, and as of its last equity round in July 2019 (when it raised $67 million), it was valued at over $1 billion, which Bolt has confirmed to me remains the valuation here.

Bolt further said that its service now has more than 30 million users in 150 cities and 35 countries and is profitable in two-thirds of its markets.

The timing of the last equity round, and the company’s ambitious growth plans, could well mean it will be raising more equity funding again soon. Bolt’s existing backers include the Chinese ride-hailing giant Didi, Creandum, G Squared and Daimler (which owns a ride-hailing competitor, Free Now — formerly called MyTaxi).

“Bolt is a good example of European excellence in tech and innovation. As you say, to stand still is to go backwards, and Bolt is never standing still,” said EIB’s vice president, Alexander Stubb, in a statement. “The Bank is very happy to support the company in improving its services, as well as allowing it to branch out into new service fields. In other words, we’re fully on board!”

The EIB is the nonprofit, long-term lending arm of the European Union, and this financing comes in the form of a quasi-equity facility.

Also known as venture debt, the financing is structured as a loan, where repayment terms are based on a percentage of future revenue streams, and ownership is not diluted. The funding is backed in turn by the European Fund for Strategic Investments, as part of a bigger strategy to boost investment in promising companies, and specifically riskier startups, in the tech industry. It expects to make and spur some €458.8 billion in investments across 1 million startups and SMEs as part of this plan.
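As a rough, purely hypothetical illustration of how a revenue-based facility works (the actual percentage, repayment cap and schedule of Bolt’s loan are not disclosed), repayment can be modelled as a fixed share of each year’s revenue until a multiple of the principal has been returned:

    # Hypothetical sketch of revenue-based repayment; all figures are invented,
    # not the terms of Bolt's EIB facility.
    def repayment_schedule(principal, repayment_cap, revenue_share, yearly_revenues):
        """Pay a fixed share of each year's revenue until principal * cap is repaid."""
        owed = principal * repayment_cap
        paid = 0.0
        schedule = []
        for year, revenue in enumerate(yearly_revenues, start=1):
            payment = min(revenue * revenue_share, owed - paid)
            paid += payment
            schedule.append((year, payment, owed - paid))
            if paid >= owed:
                break
        return schedule

    # Example: a €50M loan, 1.4x repayment cap, 3% of revenue per year (all hypothetical).
    for year, payment, remaining in repayment_schedule(50e6, 1.4, 0.03,
                                                       [400e6, 600e6, 900e6, 1.3e9, 1.8e9]):
        print(f"Year {year}: paid €{payment:,.0f}, €{remaining:,.0f} still owed")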

Opting for a “quasi-equity” loan instead of a straight equity or debt investment is attractive to Bolt for a couple of reasons. One is the fact that the funding comes without ownership dilution. Two is the endorsement and support of the EU itself, in a market category where tech disruptors have been known to run afoul of regulators and lawmakers, in part because of the ubiquity and nature of the transportation/mobility industry.

“Mobility is one of the areas where Europe will really benefit from a local champion who shares the values of European consumers and regulators,” said Martin Villig, the co-founder of Bolt (whose brother Markus is the CEO), in a statement. “Therefore, we are thrilled to have the European Investment Bank join the ranks of Bolt’s backers as this enables us to move faster towards serving many more people in Europe.”

(Butting heads with authorities is something that Bolt is no stranger to: It tried to enter the lucrative London taxi market through a backdoor to bypass the waiting time to get a license. It really didn’t work, and the company had to wait another 21 months to come to London doing it by the book. In its first six months of operation in London, the company has picked up 1.5 million customers.)

While private VCs account for the majority of startup funding, backing from government groups is an interesting and strategic route for tech companies that are making waves in large industries that sit adjacent to technology. Before it was acquired by PayPal, iZettle also picked up a round of funding from the EIB specifically to invest in its AI R&D. Navya, the self-driving bus and shuttle startup, has also raised money from the EIB in the past, as has MariaDB.

One of the big issues with on-demand transportation companies has been their safety record, a huge area of focus given the potential scale and ubiquity of a transportation or mobility service. Indeed, this is at the center of Uber’s latest scuffle in Europe, where London’s transport regulator has rejected a license renewal for the company over concerns about Uber’s safety record. (Uber is appealing; while it does, it’s business as usual.)

So it’s no surprise that with this funding, Bolt says that it will be specifically using the money to develop technology to “improve the safety, reliability and sustainability of its services while maintaining the high efficiency of the company’s operations.”

Bolt is one of a group of companies that have been hatched out of Estonia, which has worked to position itself as a leader in Europe’s tech industry as part of its own economic regeneration in the decades after existing as part of the Soviet Union (it formally left in 1990). The EIB has invested around €830 million in Estonian projects in the last five years.

“Estonia is at the forefront of digital transformation in Europe,” said Paolo Gentiloni, European Commissioner for the Economy, in a statement. “I am proud that Europe, through the Investment Plan, supports Estonian platform Bolt’s research and development strategy to create innovative and safe services that will enhance urban mobility.”

Apple buys edge-based AI startup Xnor.ai for a reported $200M

By Devin Coldewey

Xnor.ai, spun off in 2017 from the nonprofit Allen Institute for AI (AI2), has been acquired by Apple for about $200 million. A source close to the company corroborated a report this morning from GeekWire to that effect.

Apple confirmed the reports with its standard statement for this sort of quiet acquisition: “Apple buys smaller technology companies from time to time and we generally do not discuss our purpose or plans.” (I’ve asked for clarification just in case.)

Xnor.ai began as a process for making machine learning algorithms highly efficient — so efficient that they could run on even the lowest tier of hardware out there, things like embedded electronics in security cameras that use only a modicum of power. Yet using Xnor’s algorithms they could accomplish tasks like object recognition, which in other circumstances might require a powerful processor or connection to the cloud.
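Xnor.ai takes its name from the XNOR-Net line of research on binarized neural networks (which Farhadi co-authored), where weights and activations are reduced to +1/-1 so that the expensive multiply-accumulate at the heart of inference collapses into a bitwise XNOR plus a popcount. The following is a minimal sketch of that trick for a single dot product, purely for illustration; it is not the company’s actual implementation:

    # Minimal sketch of why binarized networks are cheap to run, in the spirit of
    # the XNOR-Net research Xnor.ai grew out of (illustrative only).

    def binarize(bits):
        """Pack a list of +1/-1 values into an integer bitmask (1 for +1, 0 for -1)."""
        mask = 0
        for i, b in enumerate(bits):
            if b > 0:
                mask |= 1 << i
        return mask

    def binary_dot(x_mask, w_mask, n):
        """Dot product of two {-1,+1} vectors using XNOR + popcount instead of multiplies."""
        matches = bin(~(x_mask ^ w_mask) & ((1 << n) - 1)).count("1")  # XNOR, then popcount
        return 2 * matches - n  # matching positions contribute +1, mismatches -1

    x = [+1, -1, +1, +1, -1, -1, +1, -1]
    w = [+1, +1, -1, +1, -1, +1, +1, -1]
    print(binary_dot(binarize(x), binarize(w), len(x)))  # same value as sum(xi * wi), i.e. 2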

CEO Ali Farhadi and his founding team put the company together at AI2 and spun it out just before the organization formally launched its incubator program. It raised $2.7M in early 2017 and $12M in 2018, both rounds led by Seattle’s Madrona Venture Group, and has steadily grown its local operations and areas of business.

The $200M acquisition price is only approximate, the source indicated, but even if the final number were less by half that would be a big return for Madrona and other investors.

The company will likely move to Apple’s Seattle offices; GeekWire, visiting the Xnor.ai offices (in inclement weather, no less), reported that a move was clearly underway. AI2 confirmed that Farhadi is no longer working there, but he will retain his faculty position at the University of Washington.

An acquisition by Apple makes perfect sense when one thinks of how that company has been directing its efforts towards edge computing. With a chip dedicated to executing machine learning workflows in a variety of situations, Apple clearly intends for its devices to operate independent of the cloud for such tasks as facial recognition, natural language processing, and augmented reality. It’s as much for performance as privacy purposes.

Its camera software especially makes extensive use of machine learning algorithms for both capturing and processing images, a compute-heavy task that could potentially be made much lighter with the inclusion of Xnor’s economizing techniques. The future of photography is code, after all — so the more of it you can execute, and the less time and power it takes to do so, the better.

 

It could also indicate new forays into the smart home, where Apple has made some tentative steps with HomePod. But Xnor’s technology is highly adaptable, and as such it’s rather difficult to predict what it might enable for a company as vast as Apple.

Save over $200 with discounted student tickets to Robotics + AI 2020

By Emma Comeau

If you’re a current student and you love robots — and the AI that drives them — you do not want to miss out on TC Sessions: Robotics + AI 2020. Our day-long deep dive into these two life-altering technologies takes place on March 3 at UC Berkeley and features the best and brightest minds, makers and influencers.

We’ve set aside a limited number of deeply discounted tickets for students because, let’s face it, the future of robotics and AI can’t happen without cultivating the next generation. Tickets cost $50, which means you save more than $200. Reserve your student ticket now.

Not a student? No problem, we have a savings deal for you, too. If you register now, you’ll save $150 when you book an early-bird ticket by February 14.

More than 1,000 robotics and AI enthusiasts, experts and visionaries attended last year’s event, and we expect even more this year. Talk about a targeted audience and the perfect place for students to network for an internship, employment or even a future co-founder.

What can you expect this year? For starters, we have an outstanding lineup of speakers and demos — more than 20 presentations — on tap. Let’s take a quick look at just some of the offerings you don’t want to miss:

  • Saving Humanity from AI: Stuart Russell, UC Berkeley professor and AI authority, argues in his acclaimed new book, “Human Compatible,” that AI will doom humanity unless technologists fundamentally reform how they build AI algorithms.
  • Opening the Black Box with Explainable AI: Machine learning and AI models can be found in nearly every aspect of society today, but their inner workings are often as much a mystery to their creators as to those who use them. UC Berkeley’s Trevor Darrell, Krishna Gade of Fiddler Labs and Karen Myers from SRI International will discuss what we’re doing about it and what still needs to be done.
  • Engineering for the Red Planet: Maxar Technologies has been involved with U.S. space efforts for decades and is about to send its fifth robotic arm to Mars aboard NASA’s Mars 2020 rover. Lucy Condakchian, general manager of robotics at Maxar, will speak to the difficulty and exhilaration of designing robotics for use in the harsh environments of space and other planets.

That’s just a sample — take a gander at the event agenda to help you plan your time accordingly. We’ll add even more speakers in the coming weeks, so keep checking back.

TC Sessions: Robotics + AI 2020 takes place on March 3 at UC Berkeley. It’s a full day focused on exploring the future of robotics and a great opportunity for students to connect with leading technologists, founders, researchers and investors. Join us in Berkeley. Buy your student ticket today and get ready to build the future.

Is your company interested in sponsoring or exhibiting at TC Sessions: Robotics + AI 2020? Contact our sponsorship sales team by filling out this form.

Accel-backed Clockwise launches an AI assistant for Google Calendar

By Lucas Matney

Startups are paying for more subscription services than ever to drive collaboration during working hours, but — whether or not the Slack-lash is indeed a real thing — the truth is that filling your day with meetings can sometimes be detrimental to actually… working.

Time management software and daily planners put the accountability on the individual, but when you’re in several hours of meetings per day, there’s a lot that’s out of your control. I recently met with Matt Martin, the CEO of Accel-backed Clockwise. His startup has a really interesting pitch for taking a look at individual employee schedules through the lens of the entire team and moving meetings around to maximize “focus time,” which Martin defines as blocks of at least two uninterrupted hours during your day.
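As a concrete, hypothetical illustration of that definition, focus time can be computed as the free gaps of at least two hours between a day’s meetings; the sketch below is not Clockwise’s actual algorithm:

    # Hypothetical sketch of the "focus time" idea described above: find gaps of at
    # least two hours between meetings in a working day (not Clockwise's actual code).
    from datetime import datetime, timedelta

    def focus_blocks(meetings, day_start, day_end, min_hours=2):
        """Return free blocks of at least `min_hours` between (start, end) meetings."""
        blocks, cursor = [], day_start
        for start, end in sorted(meetings):
            if start - cursor >= timedelta(hours=min_hours):
                blocks.append((cursor, start))
            cursor = max(cursor, end)
        if day_end - cursor >= timedelta(hours=min_hours):
            blocks.append((cursor, day_end))
        return blocks

    day = datetime(2020, 1, 20)
    meetings = [(day.replace(hour=10), day.replace(hour=11)),
                (day.replace(hour=14), day.replace(hour=15))]
    for start, end in focus_blocks(meetings, day.replace(hour=9), day.replace(hour=17)):
        print(start.strftime("%H:%M"), "-", end.strftime("%H:%M"))  # 11:00-14:00, 15:00-17:00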

Clockwise’s customers already include Lyft, Asana, Strava and Twitter; they’ve been aiming to build out a wide footprint of customers by offering their product for free at first. They’ve raised more than $13 million over two rounds from investors including Accel, Greylock and Slack Fund.

The startup’s software, which integrates with Google Calendar, has been bringing people into the fold for shifting these meetings around, but their latest update aims to give teams the option to let its Clockwise Calendar Assistant do some of the heavy lifting automatically.

Managing calendars en masse obviously has the potential to piss people off. Clockwise has tried to build in certain accommodations to keep friction low, and they’ve gotten good feedback from early testers.

Certain employees, like engineers, likely benefit from more uninterrupted time to work, so Clockwise gave employees a way to designate how much “focus time” they generally need per week. They’ve also added the ability to bring personal calendars into the mix so that users can designate time when they have unmovable personal conflicts. Not every meeting is in your office; when there are locations in the invites, Clockwise will account for travel between the two addresses inside your calendar.

Some meetings can’t be moved, others rely on off-site folks in different time zones, sometimes a high-level exec needs to be in a meeting and their schedule is all that matters. Not all meetings need to be flexible, but Clockwise hopes that by automatically resolving conflicts for team meetings, they can leave employees with fewer useless half-hour chunks of time during their day.

Alongside today’s Assistant update, Clockwise is also boosting its compatibility with Slack. Users have the ability to let Clockwise turn on do-not-disturb automatically during designated “focus time” and can let the app populate their Slack status with the current meeting they’re in.

When you think about how much energy has been spent by startups looking to reinvent email or chat, it’s fascinating that there hasn’t been more energy fixed on the humble calendar. Anecdotally, there seems to be plenty of demand for a “luxury” Google Calendar, and yet there hasn’t seemed to be a proportional amount of action. Clockwise has one of the more interesting offerings, though I’m sure more will be popping up alongside it soon.

Verbit raises $31M Series B to expand its transcription and captioning service

By Frederic Lardinois

Verbit, a Tel Aviv- and New York-based startup that provides AI-assisted transcription and captioning services to professional users, today announced that it has raised a $31 million Series B round led by growth equity firm Stripes. Existing investors Viola Ventures, Vertex Ventures, HV Ventures, Oryzn Capital and ClalTech are also participating in the round, which brings the company’s total funding, which includes a $23 million Series A round in 2019, to $65 million.

The three-year-old company plans to use the new funding to expand to new verticals and add new languages. Currently, its focus is on the media and legal industries, as well as educational institutions. In total, the company has more than 150 customers, including Harvard, Stanford and Coursera. Verbit also plans to double its headcount in 2020.

Verbit’s AI-based tools get to about 90 percent accuracy, but the company also works with about 15,000 human transcribers who make revisions as necessary in order to get to 99 percent accuracy. As with virtually all machine-learning systems, those changes then flow back into the system in order to improve its accuracy.
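That workflow is a standard human-in-the-loop pattern: the machine drafts, humans correct, and the corrections become new training data. The sketch below illustrates the shape of it with placeholder names; Verbit’s real pipeline is not public:

    # Hypothetical sketch of the human-in-the-loop pattern described above. Names
    # and the toy "model" are placeholders, not Verbit's API.

    def transcribe_with_review(audio_id, machine_transcribe, reviewers, training_buffer):
        draft = machine_transcribe(audio_id)            # ~90% accurate machine draft
        final = draft
        for review in reviewers:                        # humans revise toward ~99%
            final = review(audio_id, final)
        if final != draft:
            training_buffer.append((audio_id, final))   # corrections flow back to training
        return final

    # Toy demo with stand-in functions.
    buffer = []
    result = transcribe_with_review(
        "lecture_001",
        machine_transcribe=lambda _: "machne learning is grate",
        reviewers=[lambda _, text: text.replace("machne", "machine").replace("grate", "great")],
        training_buffer=buffer,
    )
    print(result)   # "machine learning is great"
    print(buffer)   # [("lecture_001", "machine learning is great")]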

Recently, the company also launched its real-time transcription service and opened its New York office.

“When I established Verbit three years ago, I didn’t anticipate we would become one of the market-leading companies in our industry so quickly,” said Tom Livne, CEO and co-founder of Verbit. “This latest financing round is an important milestone in Verbit’s journey and strengthens the incredible momentum we had in 2019. The collaboration with Stripes is a great indicator of Verbit’s category-leading product and will allow us to continue innovating in the market.”

Can a Digital Avatar Fire You?

By John Brandon
Samsung’s new artificial humans look, blink, and smile like us. But bots still shouldn't deal with complex human emotions.

Now Stores Must Tell You How They're Tracking Your Every Move

By Tom Simonite
California's new privacy law has spurred a torrent of online notices. But the law is also forcing changes offline, in traditional stores.

The crypto rich find security in Anchorage

By Josh Constine

Not the city, the $57 million-funded cryptocurrency custodian startup. When someone wants to keep tens or hundreds of millions of dollars in Bitcoin, Ethereum, or other coins safe, they put them in Anchorage’s vault. And now they can trade straight from custody so they never have to worry about getting robbed mid-transaction.

With backing from Visa, Andreessen Horowitz, and Blockchain Capital, Anchorage has emerged as the darling of the cryptocurrency security startup scene. Today it’s flexing its muscle and war chest by announcing its first acquisition, crypto risk modeling company Merkle Data.


Anchorage has already integrated Merkle’s technology and team to power today’s launch of its new trading feature. It eliminates the need for big crypto owners to manually move assets in and out of custody to buy or sell, or to set up their own in-house trading. Instead of grabbing some undisclosed spread between the spot price and the price Anchorage quotes its clients, it charges a transparent per transaction fee of a tenth of a percent.

It’s stressful enough trading around digital fortunes. Anchorage gives institutions and token moguls peace of mind throughout the process while letting them stake and vote while their riches are in custody. Anchorage CEO Nathan McCauley tells me “Our clients want to be able to fund a bank account with USD and have it seamlessly converted into crypto, securely held in their custody accounts. Shockingly, that’s not yet the norm–but we’re changing that.”

Buy and sell safely

Founded in 2017 by leaders behind Docker and Square, Anchorage’s core business is its omnimetric security system that takes passwords that can be lost or stolen out of the equation. Instead, it uses humans and AI to review scans of your biometrics, nearby networks, and other data for identity confirmation. Then it requires consensus approval for transactions from a set of trusted managers you’ve whitelisted.
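The approval step described here amounts to a quorum check against a whitelist of trusted managers. Below is a minimal hypothetical sketch of that idea, not Anchorage’s actual system:

    # Hypothetical sketch of consensus approval: a transaction only proceeds once a
    # quorum of whitelisted approvers has signed off (illustration only).

    def approve_transaction(approvals, whitelist, quorum):
        """Count approvals only from whitelisted managers and require a quorum."""
        valid = {manager for manager in approvals if manager in whitelist}
        return len(valid) >= quorum

    whitelist = {"alice", "bob", "carol", "dave"}
    print(approve_transaction({"alice", "bob"}, whitelist, quorum=3))                      # False
    print(approve_transaction({"alice", "bob", "mallory", "carol"}, whitelist, quorum=3))  # True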

With Anchorage Trading, the startup promises efficient order routing, transparent pricing, and multi-venue liquidity from OTC desks, exchanges, and market makers. “Because trading and custody are directly integrated, we’re able to buy and sell crypto from custody, without having to make risky external transfers or deal with multiple accounts from different providers” says Bart Stephens, founder and managing partner of Blockchain Capital.

Trading isn’t Anchorage’s primary business, so it doesn’t have to squeeze clients on their transactions and can instead try to keep them happy for the long-term. That also sets up Anchorage to be a foundational part of the cryptocurrency stack. It wouldn’t disclose the terms of the Merkle Data acquisition, but the Pantera Capital-backed company brings quantitative analysts to Anchorage to keep its trading safe and smart.

“Unlike most traditional financial assets, crypto assets are bearer assets: in order to do anything with them, you need to hold the underlying private keys. This means crypto custodians like Anchorage must play a much larger role than custodians do in traditional finance” says McCauley. “Services like trading, settlement, posting collateral, lending, and all other financial activities surrounding the assets rely on the custodian’s involvement, and in our view are best performed by the custodian directly.”

Anchorage will be competing with Coinbase, which offers integrated custody and institutional brokerage through its agency-only OTC desk. Fidelity Digital Assets combines trading and brokerage, but for Bitcoin only. BitGo offers brokerage from custody through a partnership with Genesis Global Trading. But Anchorage hopes its experience handling huge sums, clear pricing, and credentials like membership in Facebook’s Libra Association will win it clients.

McCauley says the biggest threat to Anchorage isn’t competitors, though, but hazy regulation. Anchorage is building a core piece of the blockchain economy’s infrastructure. But for the biggest financial institutions to be comfortable getting involved, lawmakers need to make it clear what’s legal.

The Case for a Light Hand With AI and a Hard Line on China

By Nicholas Thompson
In a WIRED Q&A, the US chief technology officer warns against overregulating tech, underestimating the Chinese, and losing America's lead in quantum computing.

Conversational AI Can Propel Social Stereotypes

By Sharone Horowit-Hendler, James Hendler
Designers need to consider the ethics of gendering not just AI voices, but also their tone, speed, word choice, and other speech patterns.

InsightFinder gets a $2M seed to automate outage prevention

By Ron Miller

InsightFinder, a startup from North Carolina based on 15 years of academic research, wants to bring machine learning to system monitoring to automatically identify and fix common issues. Today, the company announced a $2 million seed round.

IDEA Fund Partners, a VC out of Durham, N.C., led the round, with participation from Eight Roads Ventures and Acadia Woods Partners. The company was founded by North Carolina State University professor Helen Gu, who spent 15 years researching this problem before launching the startup in 2015.

Gu also announced that she had brought on former Distil Networks co-founder and CEO Rami Essaid to be chief operating officer. Essaid, who sold his company earlier this year, says his new company focuses on taking a proactive approach to application and infrastructure monitoring.

“We found that these problems happen to be repeatable, and the signals are there. We use artificial intelligence to predict and get out ahead of these issues,” he said. He adds that it’s about using technology to be proactive, and he says that today the software can prevent about half of the issues before they even become problems.

If you’re thinking that this sounds a lot like what Splunk, New Relic and Datadog are doing, you wouldn’t be wrong, but Essaid says that these products take a siloed look at one part of the company technology stack, whereas InsightFinder can act as a layer on top of these solutions to help companies reduce alert noise, track a problem when there are multiple alerts flashing and completely automate issue resolution when possible.
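One way to picture the “reduce alert noise” claim is simple time-window clustering: alerts from different monitoring tools that fire close together are grouped into a single candidate incident. The sketch below is a toy illustration under that assumption, not InsightFinder’s actual method:

    # Toy illustration of alert-noise reduction: cluster alerts from different
    # monitoring tools that fire within a short window (not InsightFinder's algorithm).

    def group_alerts(alerts, window_seconds=120):
        """Group (timestamp, source, message) alerts whose timestamps fall within a window."""
        groups, current = [], []
        for alert in sorted(alerts):                      # sort by timestamp
            if current and alert[0] - current[-1][0] > window_seconds:
                groups.append(current)
                current = []
            current.append(alert)
        if current:
            groups.append(current)
        return groups

    alerts = [(100, "datadog", "CPU spike on api-1"),
              (130, "new_relic", "p99 latency up"),
              (160, "splunk", "error rate rising"),
              (900, "datadog", "disk usage 85%")]
    for group in group_alerts(alerts):
        print([msg for _, _, msg in group])   # first three cluster together; the last stands alone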

“It’s the only company that can actually take a lot of signals and use them to predict when something’s going to go bad. It doesn’t just help you reduce the alerts and help you find the problem faster, it actually takes all of that data and can crunch it using artificial intelligence to predict and prevent [problems], which nobody else right now is able to do,” Essaid said.

For now, the software is installed on-prem at its current set of customers, but the startup plans to create a SaaS version of the product in 2020 to make it accessible to more customers.

The company launched in 2015, and has been building out the product using a couple of National Science Foundation grants before this investment. Essaid says the product is in use today in 10 large companies (which he can’t name yet), but it doesn’t have any true go-to-market motion. The startup intends to use this investment to begin to develop that in 2020.

Listen to top VCs discuss the next generation of automation startups at TC Sessions: Robotics+AI

By Brian Heater

Robotics, AI and automation have long been among the hottest categories for tech investment. After years and decades of talk, however, those big bets are starting to pay off. Robots are beginning to show up in nearly every aspect of work, from warehouse fulfillment to agriculture to retail and construction.

Our annual TC Sessions: Robotics+AI event on March 3 affords us the ability to bring together some of the top investors in the category to discuss the hottest startups, best bets and opine on where the industry is going. And this year’s VC panel is arguably our strongest yet:

  • Eric Migicovsky is a general partner at Y Combinator. Prior to joining the firm, he co-founded Pebble. The smartwatch pioneer was itself a YC-backed venture and ran three of Kickstarter’s all-time top crowdfunding campaigns. Migicovsky joined YC following Fitbit’s acquisition of the startup in 2016.
  • DCVC partner Kelly Chen focuses primarily on the AI, robotics, manufacturing and work-related sectors. Her work is generally focused on the world of hardware, along with the transformations of populations and labor.
  • Dror Berman co-founded Innovation Endeavors in 2010 with former Google CEO Eric Schmidt. A key driver in the firm’s investments in Uber, SoFi and Formlabs, Berman also focuses on robotics, including companies like Blue River Technology and Common Sense Robotics.

TC Sessions: Robotics+AI returns to Berkeley on March 3. Make sure to grab your early-bird tickets today for $275 before prices go up by $100. Startups, book a demo table right here and get in front of 1,000+ of Robotics/AI’s best and brightest — each table comes with four attendee tickets.
