OpenStack spins out its Zuul open source CI/CD platform

There are few open source projects as complex as OpenStack, which essentially provides large companies with all the tools to run the equivalent of the core AWS services in their own data centers. To build OpenStack’s various systems the team also had to develop some of its own devops tools, and in 2012, that meant developing Zuul, an open source continuous integration and delivery (CI/CD) platform. Now, with the release of Zuul v3, the team has decided to decouple Zuul from OpenStack and to run it as an independent project. It’s not quite leaving the OpenStack ecosystem, though, since it will still be hosted by the OpenStack Foundation.

Now all of that may seem a bit complicated, but at this point, the OpenStack Foundation is simply the home of OpenStack and other related infrastructure projects. The first one of those was obviously OpenStack itself, followed by the Kata Containers project late last year. Zuul is simply the third of these projects.

The general concept behind Zuul is to provide developers with a system for automatically merging, building and testing new changes to a project. It’s extensible and supports a number of different development platforms, including GitHub and the Gerrit code review and project management tool.

Current contributors include BMW, GitHub, GoDaddy, Huawei, Red Hat and SUSE. “The wide adoption of CI/CD in our software projects is the foundation to deliver high-quality software in time by automating every integral part of the development cycle from simple commit checks to full release processes,” said BMW software engineer Tobias Henkel. “Our CI/CD development team at BMW is proud to be part of the Zuul community and will continue to be active contributors of the Zuul OSS project.”

The spin-off of Zuul comes at an interesting time in the CI/CD community, which is currently spoiled for choice. With Spinnaker, Google and Netflix are betting on an open source CD platform that solves some of the same problems as Zuul, for example, while Jenkins and similar projects continue to go strong, too. The Zuul project notes that its focus is more strongly on multi-repo gating, which makes it ideal for handling very large and complex projects. Representatives of all of these open source projects are actually meeting at the OpenDev conference in Vancouver, Canada, which is running in parallel with the semi-annual OpenStack Summit there, and my guess is that we’ll hear quite a bit more about all of these projects in the coming days and weeks.

AWS adds more EC2 instance types with local NVMe storage

AWS is adding a new kind of virtual machine to its growing list of EC2 options. These new machines feature local NVMe storage, which offers significantly faster throughput than standard SSDs.

These new so-called C5d instances join the lineup of compute-optimized C5 instances the service already offered. AWS cites high-performance computing workloads, real-time analytics, multiplayer gaming and video encoding as potential use cases for its regular C5 machines, and with the addition of this faster storage option, chances are users who switch will see even better performance.

Since the local storage is physically attached to the machine, its data is lost when the instance is stopped, so this is meant for storing intermediate files, not for long-term storage.

Both C5 and C5d instances share the same underlying platform, with 3.0 GHz Intel Xeon Platinum 8000 processors.

The new instances are now available in a number of AWS’s U.S. regions, as well as in the service’s Canada region. Prices are, unsurprisingly, a bit higher than for regular C5 machines, starting at $0.096 per hour for the most basic machine in AWS’s Oregon region, for example. Regular C5 machines start at $0.085 per hour.

It’s worth noting that the EC2 F1 instances, which offer access to FPGAs, also use NVMe storage. Those are highly specialized machines, though, while the C5 instances are interesting to a far wider audience of developers.

On top of the NVMe announcement, AWS today also noted that its EC2 Bare Metal Instances are now generally available. These machines provide direct access to all the features of the underlying hardware, making them ideal for running applications that simply can’t run on virtualized hardware and for running secured container clusters. These bare metal instances also offer support for NVMe storage.

Contentstack doubles down on its headless CMS

It’s been about two years since Built.io launched Contentstack, a headless content management system for the enterprise. Contentstack was always a bit of an odd product at Built.io, which mostly focuses on providing integration tools like Flow for large companies (think IFTTT, but for enterprise workflows). Contentstack is pretty successful in its own right, though, with customers ranging from the Miami Heat to Cisco and Best Buy. Because of this, Built.io decided to spin out the service into its own business at the beginning of this year, and now it’s doubling down on serving modern enterprises that want to bring their CMS strategy into the 21st century.

As Built.io COO Matthew Baier told me, the last few years were quite good to Contentstack. The company has doubled its deal sizes since January, for example, and it’s now seeing hockey-stick growth. Contentstack now has about 40 employees, as well as a dedicated support team and sales staff. Why spin it out as its own company? “This has been a red-hot space for us,” Baier said. “What we decided to do last year was to do both opportunities justice and really double down on Contentstack as a separate business.”

Back when Contentstack launched, the service positioned itself as an alternative to Drupal and WordPress. Now, the team is looking at it more in terms of Adobe’s CMS tools.

And these days, it’s all about headless CMS, which essentially decouples the backend from the front-end presentation. That’s a relatively new trend in the world of CMS, but one that enables companies to bring their content (be that text, images or video and audio) to not just the web but also mobile apps and new platforms like Amazon’s Alexa and Google’s Assistant. Using this model, the CMS essentially becomes another API the front-end developers can use. Contentstack likes to call this “Content-as-a-Service,” but I’m tired of X-as-a-Service monikers, so I won’t do that. It is worth noting that in this context, “content” can be anything from blog posts to the descriptions and images that go with a product on an e-commerce site.
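To make that “CMS as just another API” idea concrete, here’s a minimal sketch of what the front end’s side of a headless CMS looks like. The endpoint shape and credential headers below are generic placeholders, not Contentstack’s actual delivery API.

```python
# A minimal sketch of the "CMS as just another API" idea. The endpoint shape
# and credential headers below are generic placeholders, not Contentstack's
# actual delivery API.
import requests

CMS_BASE = "https://cms.example.com/v3"  # hypothetical content delivery endpoint
HEADERS = {"api_key": "YOUR_API_KEY", "access_token": "YOUR_DELIVERY_TOKEN"}


def get_entries(content_type: str) -> list:
    """Fetch all published entries of a content type, e.g. 'blog_post' or 'product'."""
    resp = requests.get(
        f"{CMS_BASE}/content_types/{content_type}/entries",
        headers=HEADERS,
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json().get("entries", [])


# The same JSON can feed a website, a native app or an Alexa skill; only the
# presentation layer changes.
for post in get_entries("blog_post"):
    print(post.get("title"), "-", post.get("url"))
```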

“Headless CMS is exciting because it is modernizing the space,” explained Baier. “It’s probably the most exciting thing to happen in this space in 25 years. […] We are doing for CMS what Salesforce did for CRM.”

Not every company needs this kind of system that’s ready for an omni-channel strategy, of course, but even for companies that still mostly focus on the web — or whose website is the main product — a service like Contentstack makes sense because it allows them to quickly iterate on the front end without having to worry about the backend service that powers it.

The latest version of Contentstack introduces a number of new features for content editors, including a better workflow management system that streamlines the creation, review and deployment of content in the system, as well as support for publishing rules that ensure only approved content makes it into the official channels (it wouldn’t be an enterprise product if it didn’t have some role-based controls, right?). Also new in today’s update is the ability to bundle content together and then release it en masse, maybe to coincide with a major release, promotional campaign or other event.

Looking ahead, Baier tells me that the team wants to delve a bit deeper into how it can integrate with more third-party services. Given that this is Built.io’s bread and butter, that’s probably no major surprise, but in the CMS world, integrations are often a major pain point. It’s those integrations, though, that users really need, as they now rely on more third-party services than ever to run their businesses. “We believe the future is in these composable stacks,” Baier noted.

The team is also looking at how it can best use AI and machine learning, especially in the context of SEO.

One thing Contentstack and Built.io have never done is take outside money. Baier says “never say never,” but it doesn’t look like the company is likely to seek outside funding anytime soon.

Auth0 snags $55M Series D, seeks international expansion

Auth0, a startup based in Seattle, has been helping developers with a set of APIs to build authentication into their applications for the last five years. It’s raised a fair bit of money along the way to help extend that mission, and today the company announced a $55 million Series D.

This round was led by Sapphire Ventures with help from World Innovation Lab, and existing investors Bessemer Venture Partners, Trinity Ventures, Meritech Capital and K9 Ventures. Today’s investment brings the total raised to $110 million. The company did not want to share its valuation.

CEO Eugenio Pace said the investment should help them expand further internationally. In fact, one of the investors, World Innovation Lab, is based in Japan and should help with their presence there. “Japan is an important market for us and they should help explain to us how the market works there,” he said.

The company offers developers an easy way to build authentication services into their applications, also known as Identity as a Service (IDaaS). It’s a lot like Stripe for payments or Twilio for messaging: instead of building the authentication layer from scratch, developers simply add a few lines of code and can take advantage of the services available on the Auth0 platform.
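For a sense of what those “few lines of code” can look like, here’s a minimal sketch using the standard OAuth 2.0 client-credentials flow against an Auth0 tenant’s token endpoint; the tenant domain, client credentials and API audience below are placeholders.

```python
# A minimal sketch of the OAuth 2.0 client-credentials flow against an Auth0
# tenant (machine-to-machine authentication). The tenant domain, client ID,
# client secret and API audience are placeholders.
import requests

AUTH0_DOMAIN = "your-tenant.auth0.com"  # placeholder tenant


def get_access_token(client_id: str, client_secret: str, audience: str) -> str:
    resp = requests.post(
        f"https://{AUTH0_DOMAIN}/oauth/token",
        json={
            "grant_type": "client_credentials",
            "client_id": client_id,
            "client_secret": client_secret,
            "audience": audience,
        },
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]


# The returned token is then sent as a Bearer header when calling your own API.
token = get_access_token("CLIENT_ID", "CLIENT_SECRET", "https://api.example.com")
print(token[:16], "...")
```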

That platform includes a range of services, such as single sign-on, two-factor authentication, passwordless login and breached-password detection.

They have a free tier, which doesn’t even require a credit card, and paid tiers based on the types of users — regular versus enterprise — along with the number of users. They also charge based on machine-to-machine authentication. Pace reports they have 3,500 paying customers and tens of thousands of users on the free tier.

All of that has added up to a pretty decent business. While Pace would not share specific numbers, he did indicate the company doubled its revenue last year and expected to do so again this year.

With a cadence of getting funding every year for the last three years, Pace says this round may mark the end of that fundraising cycle for a time. He wasn’t ready to commit to the idea of an IPO, saying that is likely a couple of years away, but he says the company is close to profitability.

With the new influx of money, the company plans to expand its workforce as it moves into markets across the world. It currently has 300 employees, but within a year Pace expects that number to be between 400 and 450 worldwide.

The company’s last round was a $30 million Series C last June led by Meritech Capital Partners.

Adobe now offers a free starter plan for its XD design tool

XD, Adobe’s user interface and user experience design and prototyping tool, came out of beta last October to join the group of products in the company’s Creative Cloud subscription program. Today, it’s expanding the availability of XD to a wider range of potential users with the launch of a free starter plan for individual users of XD. This plan is available to all users, no matter whether they are students or professionals.

The company also today announced the Adobe Fund for Design, a $10 million fund that will make investments and offer grants to companies in the Creative Cloud ecosystem, with a focus on XD.

“We want everybody to be fluent in the field of experience design,” Adobe Chief Product Officer and Executive VP (and Behance co-founder) Scott Belsky told me. He noted that experience design isn’t just for designers anymore, but also for marketers, the C-suite “and everybody in-between.”

For Adobe, XD is clearly a significant bet. It’s also the first major new product the company has launched in a while, and it’s in a market where others are trying to play, too, including popular tools like Sketch. While Sketch doesn’t offer a free plan, it’s hard not to look at Adobe’s move today as a sign that the company wants to take the competition head-on. And while XD is part of the somewhat pricey Creative Cloud plan, you can also get a $9.99 monthly license for XD alone.

The free plan covers the macOS and Windows versions of XD, as well as its mobile preview apps on iOS and Android, and it’ll include all of the design and prototyping features of the application.

Belsky freely talked about the competition and noted that Sketch is macOS-only, for example, and that, in his view, none of the competitors can match XD’s performance. “We believe that this is the best platform and industrial grade experience design solution out there,” he said. Belsky also noted he believes that, in the long run, XD will be as big as Photoshop.

As for the investment fund, Belsky noted that the company wants to optimize for flexibility. That means the fund is global and not just for investments but also outright grants. The idea here is to provide assistance to developers and startups that push the overall Creative Cloud ecosystem forward through plugins and integrations, though the focus right now is on XD. There is no time limit on this fund.

Adobe isn’t just launching these new plans and the new fund today. It’s also rolling out one of its regular updates to XD itself. As part of this update, the company is improving its integration with Sketch and Photoshop, for example, and it’s giving XD another performance boost to ensure it stays responsive (or “buttery,” as Belsky calls it), even with hundreds of artboards open. You can also now paste assets into multiple artboards and drag-and-drop assets to swap symbols. Password-protected Design Specs, which the company previously announced, are also now available as a beta.

Looking ahead, the company has a number of interesting new features on the roadmap. Maybe the most interesting of these are timed transitions, for when you want to design an onboarding experience, for example. Adobe’s group product manager for XD, Cicco Guzman, also demoed a new animation feature that allows designers to create more complex animations based on user input without having to learn a complex timeline-based tool. Quite a few designers today build these with other Adobe tools like After Effects, but the idea here is to keep them within a tool they have already mastered. “Part of what we’re trying to do with XD is to remove that friction that designers experience,” Guzman told me.

AWS introduces 1-click Lambda functions app for IoT

When Amazon introduced AWS Lambda in 2015, the notion of serverless computing was relatively unknown. The service lets developers deliver software without having to manage a server to do it. Instead, Amazon manages it all, and the underlying infrastructure only comes into play when an event triggers a requirement. Today, the company released an app in the iOS App Store called AWS IoT 1-Click to take that notion a step further.

The 1-click part of the name may be a bit optimistic, but the app is designed to give developers even quicker access to Lambda event triggers. These are designed specifically for simple single-purpose devices like a badge reader or a button. When you press the button, you could be connected to customer service or maintenance or whatever makes sense for the given scenario.

One particularly good example from Amazon is the Dash Button. These are simple buttons that users push to reorder goods like laundry detergent or toilet paper. Pushing the button connects the device to the internet via the home or business’s WiFi and sends a signal to the vendor to order the product in the pre-configured amount. AWS IoT 1-Click extends this capability to any developer, so long as they are working with a supported device.

To use the new feature, you need to enter your existing account information. You then configure your WiFi and can choose from a pre-configured list of devices and Lambda functions for the given device. Supported devices in this early release include the AWS IoT Enterprise Button, a commercialized version of the Dash Button, and the AT&T LTE-M Button.

Once you select a device, you define the project to trigger a Lambda function, or send an SMS or email, as you prefer. Choose Lambda for an event trigger, then touch Next to move to the configuration screen where you configure the trigger action. For instance, if pushing the button triggers a call to IT from the conference room, the trigger would send a page to IT that there was a call for help in the given conference room.

Finally, choose the appropriate Lambda function, which should work correctly based on your configuration information.
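As a rough illustration, here’s what a minimal Lambda function behind one of these buttons might look like. The event shape (click type, placement attributes) and the SNS topic are assumptions made for the sake of the sketch, not the documented 1-Click payload, so check what your device actually sends.

```python
# A rough sketch of a Lambda function behind an IoT 1-Click button placement.
# The event shape (click type, placement attributes) and the SNS topic ARN are
# assumptions made for illustration, not the documented 1-Click payload.
import json
import boto3

sns = boto3.client("sns")
TOPIC_ARN = "arn:aws:sns:us-east-1:123456789012:it-support"  # placeholder topic


def handler(event, context):
    # Pull out which kind of press it was and where the button lives.
    click = event.get("deviceEvent", {}).get("buttonClicked", {}).get("clickType", "SINGLE")
    room = event.get("placementInfo", {}).get("attributes", {}).get("room", "unknown room")

    # e.g. a single press pages IT that the given conference room needs help.
    message = f"Help requested ({click} press) in {room}"
    sns.publish(TopicArn=TOPIC_ARN, Message=message)

    return {"statusCode": 200, "body": json.dumps({"sent": message})}
```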

All of this obviously requires more than one click, and it probably involves some testing and reconfiguring to make sure you’ve entered everything correctly, but the idea of having an app to create simple Lambda functions could help people without a programming background configure buttons with simple functions, given some training on the configuration process.

It’s worth noting that the service is still in Preview, so you can download the app today, but you have to apply to participate at this time.

Favstar says it will shut down June 19 as a result of Twitter’s API changes for data streams

As Twitter develops an ever-closer hold on how it manages services around its real-time news and social networking service, a pioneer in Twitter analytics is calling it quits. Favstar, an early leader in developing a way to track and review how your and other people’s Tweets were getting liked and retweeted by others on the network, has announced that it will be shutting down on June 19 — a direct result, its creator Tim Haines notes, of changes that Twitter will be making to its own APIs, specifically around its Account Activity API, which is coming online at the same time that another API, User Streams, is being deprecated.

Favstar and others rely on User Streams to power their services. “Twitter… [has] not been forthcoming with the details or pricing,” Haines said of the newer API. “Favstar can’t continue to operate in this environment of uncertainty.”

Favstar’s announcement was made over the weekend, but the issue for it and other developers has actually been brewing for a year.

Twitter announced back in December that, as part of the launch of the Account Activity API (originally announced April 2017), it would be shutting down User Streams on June 19.

User Streams are what Favstar, and a number of other apps such as Talon, Tweetbot, Tweetings and Twitterrific (as pointed out in this blog post signed by all four on “Apps of a Feather”), are built on. Introduced to developers as the Twitter Streaming API, it aimed to provide a way to get continuous updates from a number of Twitter accounts — needed for services that either provided alternative Twitter interfaces or a way of parsing the many Tweets on the platform — without slowing the whole service down.

The newer Account Activity API provides a number of features to developers to help facilitate tracking Twitter and using services like direct messaging for business purposes.

Some of the features that the newer API covers are directly linked to functionality you get via Favstar. The crux of the problem, writes Haines, is that Twitter hadn’t given Favstar and other developers that had been working with User Streams (and other functionality that is being deprecated) answers about pricing and other details, so they could not judge whether a retooling of their services would be possible. (Twitter has provided a guide, it seems, but it doesn’t appear to address these points.)

The post on Apps of a Feather further spells out the technical issues:

“The new Account Activity API is currently in beta testing, but third-party developers have not been given access and time is running out,” the developers write. “With access we might be able to implement some push notifications, but they would be limited at the standard level to 35 Twitter accounts – our products must deliver notifications to hundreds of thousands of customers. No pricing has been given for Enterprise level service with unlimited accounts – we have no idea if this will be an affordable option for us and our users.”

One of the consequences is that “automatic refresh of your timeline just won’t work,” they continue. “There is no web server on your mobile device or desktop computer that Twitter can contact with updates. Since updating your timeline with other methods is rate-limited by Twitter, you will see delays in real-time updates during sporting events and breaking news.”

Favstar has been around since 2009 — its name a tip of the hat to the original “like” on Twitter, which was a star, not a heart. Haines writes that at its peak, it had some 50 million users and was a “huge hit” with those who realised how the network could be leveraged to build up audiences outside of Twitter — including comedians and celebrities, tech people, journalists, and so on. It’s also tinkered with its service over time, and added in a Pro tier, to make it more user-friendly.

Somewhat unusual for a popular app, Favstar appears to have always been bootstrapped.

But there have been two trends at play for years now, one specific to Twitter and another a more general shift in the wider industry of apps:

The first, regarding Twitter, is that the company has been sharpening its business focus for years to find viable, diverse and recurring sources of revenue, while at the same time putting a tighter grip around how its platform is appropriated by others. This has led the company to significantly shift its relationship with developers and third parties. In some cases, it has ceased to support and work with third-party apps that it feels effectively overlap with features and functions that Twitter offers directly.

In the case of Favstar, the service rose in prominence at a time when Twitter appeared to completely ignore the star feature. MG once described the Favorite as “the unwanted step child feature of Twitter. Though it has been around since the early days of the service, they have never really done anything to promote its use.”

Fast-forward to today, and Twitter has not only revamped the feature, replacing the star with a heart (I still prefer the star, for what it’s worth), but it also uses those endorsements to help tune its algorithm, populate your notifications tab and provide analytics to users on how their Tweets are doing. In other words, it’s doing quite a bit of what Favstar does.

And if you think of how Twitter has developed its own business model in recent years, with a push for video and working with news organisations and other media brands, the same early users of Favstar as detailed by Haines (celebs, news and other media organizations, etc.) are exactly the targets that Twitter has been trying to connect with, too.

The other, more general, trend that this latest turn has teased out is the one that we’ve heard come up many times before. Building services dependent on another platform can be a precarious state of affairs for a developer. You never know when the platform owner might simply decide to pull the plug on you. Your success could lead to many users, business growth, and even an acquisition by the platform itself — but it could nearly as quickly lead to your downfall if the platform views you as a threat, and decides to cut you off instead.

Interestingly, there could be some life left in Favstar in another galaxy far, far away. We’ve reached out both to Haines and to Twitter for further comment and will update this post as and when we learn more.

Adobe CTO leads company’s broad AI bet

There isn’t a software company out there worth its salt that doesn’t have some kind of artificial intelligence initiative in progress right now. These organizations understand that AI is going to be a game-changer, even if they might not have a full understanding of how that’s going to work just yet.

In March at the Adobe Summit, I sat down with Adobe executive vice president and CTO Abhay Parasnis, and talked about a range of subjects with him including the company’s goal to build a cloud platform for the next decade — and how AI is a big part of that.

Parasnis told me that he has a broad set of responsibilities, starting with the typical CTO role of setting the tone for the company’s technology strategy, but it doesn’t stop there by any means. He is also in charge of operational execution for the core cloud platform and all the engineering building out the platform — including AI and Sensei. That includes managing a multi-thousand-person engineering team. Finally, he’s in charge of all the digital infrastructure and the IT organization — just a bit on his plate.

Ten years down the road

The company’s transition from selling boxed software to a subscription-based cloud company began in 2013, long before Parasnis came on board. It has been a highly successful one, but Adobe knew it would take more than simply shedding boxed software to survive long-term. When Parasnis arrived, the next step was to rearchitect the base platform in a way that was flexible enough to last for at least a decade — yes, a decade.

“When we first started thinking about the next generation platform, we had to think about what do we want to build for. It’s a massive lift and we have to architect to last a decade,” he said. There’s a huge challenge because so much can change over time, especially right now when technology is shifting so rapidly.

That meant that they had to build in flexibility to allow for these kinds of changes over time, maybe even ones they can’t anticipate just yet. The company certainly sees immersive technology like AR and VR, as well as voice, as something it needs to start thinking about as a future bet — and its base platform had to be adaptable enough to support that.

Making Sensei of it all

But Adobe also needed to get its ducks in a row around AI. That’s why, around 18 months ago, the company made another strategic decision to develop AI as a core part of the new platform. It saw a lot of companies looking at a more general AI for developers, but it had a different vision, one tightly focused on Adobe’s core functionality. Parasnis sees this as the key part of the company’s cloud platform strategy. “AI will be the single most transformational force in technology,” he said, adding that Sensei is by far the thing he is spending the most time on.


The company began thinking about the new cloud platform with the larger artificial intelligence goal in mind, building AI-fueled algorithms to handle core platform functionality. Once they refined them for use in-house, the next step was to open up these algorithms to third-party developers to build their own applications using Adobe’s AI tools.

It’s actually a classic software platform play, whether the service involves AI or not. Every cloud company from Box to Salesforce has been exposing its services for years, letting developers take advantage of its expertise so they can concentrate on their own core knowledge. They don’t have to worry about building something like storage or security from scratch, because they can grab those features from a platform that has built-in expertise and provides a way to easily incorporate them into applications.

The difference here is that it involves Adobe’s core functions, so it may be intelligent auto cropping and smart tagging in Adobe Experience Manager or AI-fueled visual stock search in Creative Cloud. These are features that are essential to the Adobe software experience, which the company is packaging as an API and delivering to developers to use in their own software.

Whether or not Sensei can be the technology that drives the Adobe cloud platform for the next 10 years, Parasnis and the company at large are very much committed to that vision. We should see more announcements from Adobe in the coming months and years as they build more AI-powered algorithms into the platform and expose them to developers for use in their own software.

Parasnis certainly recognizes this as an ongoing process. “We still have a lot of work to do, but we are off in an extremely good architectural direction, and AI will be a crucial part,” he said.

AWS launches an undo feature for its Aurora database service

Aurora, AWS’s managed MySQL and PostgreSQL database service, is getting an undo feature. As the company announced today, the new Aurora Backtrack feature will allow developers to “turn back time.” For now, this only works for MySQL databases, though. Developers have to opt in to this feature and it only works for newly created database clusters or clusters that have been restored from backup.

The service does this by keeping a log of all transactions for a set amount of time (up to 72 hours). When things go bad after you’ve dropped the wrong table in your production database, you simply pause your application and select the point in time that you want to go back to. Aurora will then pause the database, too, close all open connections and drop anything that hasn’t been committed yet, before rolling back to its state before the error occurred.

Being able to reverse transactions isn’t completely new, of course. Many a database system has implemented some version of this already, including MySQL, though those implementations are often far more limited in scope than what AWS announced today.

As AWS Chief Evangelist Jeff Barr notes in today’s announcement, disaster recovery isn’t the only use case here. “I’m sure you can think of some creative and non-obvious use cases for this cool new feature,” he writes. “For example, you could use it to restore a test database after running a test that makes changes to the database. You can initiate the restoration from the API or the CLI, making it easy to integrate into your existing test framework.”
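For the API route Barr mentions, a backtrack call might look roughly like this with boto3. The cluster identifier is a placeholder and, as noted above, backtracking has to be enabled when the cluster is created or restored; treat this as a sketch rather than a definitive recipe.

```python
# A rough sketch of rewinding an Aurora MySQL cluster with Backtrack via boto3.
# The cluster identifier is a placeholder, and the cluster must have been
# created (or restored) with backtracking enabled, as noted above.
from datetime import datetime, timedelta, timezone

import boto3

rds = boto3.client("rds")

# Roll the cluster back to five minutes ago, e.g. just before an accidental
# DROP TABLE. Pause the application first so no new writes arrive mid-rewind.
target_time = datetime.now(timezone.utc) - timedelta(minutes=5)

response = rds.backtrack_db_cluster(
    DBClusterIdentifier="my-aurora-cluster",
    BacktrackTo=target_time,
)
print(response.get("Status"))
```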

Aurora Backtrack is now available to all developers. It will cost about $0.012 per one million change records for databases hosted in the company’s U.S. regions, with slightly higher prices in Europe and Asia.

Fantasmo is a decentralized map for robots and augmented reality

“Whether for AR or robots, anytime you have software interacting with the world, it needs a 3D model of the globe. We think that map will look a lot more like the decentralized internet than a version of Apple Maps or Google Maps.” That’s the idea behind new startup Fantasmo, according to co-founder Jameson Detweiler. Coming out of stealth today, Fantasmo wants to let any developer contribute to and draw from a sub-centimeter accuracy map for robot navigation or anchoring AR experiences.

Fantasmo plans to launch a free Camera Positioning Standard (CPS) that developers can use to collect and organize 3D mapping data. The startup will charge for commercial access and premium features in its TerraOS, an open-sourced operating system that helps property owners keep their maps up to date and supply them for use by robots, AR and other software equipped with Fantasmo’s SDK.

With $2 million in funding led by TenOneTen Ventures, Fantasmo is now accepting developers and property owners to its private beta.

Directly competing with Google’s own Visual Positioning System is an audacious move. Fantasmo is betting that private property owners won’t want big corporations snooping around to map their indoor spaces, and instead will want to retain control of this data so they can dictate how it’s used. With Fantasmo, they’ll be able to map spaces themselves and choose where robots can roam or if the next Pokémon GO can be played there.

“Only Apple, Google, and HERE Maps want this centralized. If this data sits on one of the big tech company’s servers, they could basically spy on anyone at any time,” says Detweiler. The prospect gets scarier when you imagine everyone wearing camera-equipped AR glasses in the future. “The AR cloud on a central server is Big Brother. It’s the end of privacy.”

Detweiler and his co-founder Dr. Ryan Measel first had the spark for Fantasmo as best friends at Drexel University. “We need to build Pokémon in real life! That was the genesis of the company,” says Detweiler. In the meantime he founded and sold LaunchRock, a 500 Startups company for creating “Coming Soon” sign-up pages for internet services.

After Measel finished his PhD, the pair started Fantasmo Studios to build augmented reality games like Trash Collectors From Space, which they took through the Techstars accelerator in 2015. “Trash Collectors was the first time we actually created a spatial map and used that to sync multiple people’s precise position up,” says Detweiler. But while building the infrastructure tools to power the game, they realized there was a much bigger opportunity to build the underlying maps for everyone’s games. Now the Santa Monica-based Fantasmo has 11 employees.

“It’s the internet of the real world,” says Detweiler. Fantasmo now collects geo-referenced photos, scans them for identifying features like walls and objects, and imports them into its point cloud model. Apps and robots equipped with the Fantasmo SDK can then pull in the spatial map for a specific location that’s more accurate than federally run GPS. That lets them peg AR objects to precise spots in your environment while making sure robots don’t run into things.

Fantasmo identifies objects in geo-referenced photos to build a 3D model of the world
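To make the camera-positioning idea a bit more tangible, here’s a purely hypothetical sketch: send a geo-referenced photo plus a rough GPS fix, get back a precise pose within the shared point-cloud map. None of these class or method names come from Fantasmo’s actual SDK; they are stand-ins for illustration only.

```python
# A purely hypothetical sketch of the camera-positioning idea: send a
# geo-referenced photo plus a rough GPS fix, get back a precise pose within the
# shared point-cloud map. None of these names come from Fantasmo's actual SDK.
from dataclasses import dataclass


@dataclass
class Pose:
    lat: float
    lon: float
    alt: float
    heading: float


class HypotheticalCPSClient:
    """Stand-in for a camera-positioning client: photo in, pose out."""

    def localize(self, image_bytes: bytes, approx_gps: tuple) -> Pose:
        # A real implementation would match image features against the point
        # cloud near approx_gps and solve for the camera pose; here we simply
        # echo the rough fix to keep the sketch self-contained.
        lat, lon = approx_gps
        return Pose(lat=lat, lon=lon, alt=0.0, heading=0.0)


# An AR app or robot would localize every few frames and anchor content (or
# plan motion) against the returned pose.
client = HypotheticalCPSClient()
print(client.localize(b"...jpeg bytes...", approx_gps=(34.0195, -118.4912)))
```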

“I think this is the most important piece of infrastructure to be built during the next decade,” Detweiler declares. That potential attracted funding from TenOneTen, Freestyle Capital, LDV, NoName Ventures, Locke Mountain Ventures and some angel investors. But it’s also attracted competitors like Escher Reality, which was acquired by Pokémon GO parent company Niantic, and Ubiquity6, which has investment from top-tier VCs like Kleiner Perkins and First Round.

Google is the biggest threat, though, with its industry-leading traditional Google Maps, its experience with indoor mapping through Tango, its new VPS initiative and near-limitless resources. Just yesterday, Google showed off an AR fox in Google Maps that you can follow for walking directions.

Fantasmo is hoping that Google’s size works against it. The startup sees a path to victory through interoperability and privacy. The big corporations want to control and preference their own platforms’ access to maps while owning the data about private property. Fantasmo wants to empower property owners to oversee that data and decide what happens to it. Measel concludes, “The world would be worse off if GPS was proprietary. The next evolution shouldn’t be any different.”