GrapheneDB Blog

Updates from the GrapheneDB team

Neo4j 3.0 on GrapheneDB

Neo Technology recently released Neo4j 3.0. In addition to some optimizations to Cypher and other features, perhaps the most exciting new highlight of the latest release is Bolt, which is “a new network protocol designed for high-performance access to graph databases” (read the release notes here).

Bolt introduces a completely new way of working with Neo4j. Its goal is to “consolidate and refine” all the ways of doing work with Neo4j, providing more consistent access with a pleasant separation of responsibility. And the spirit of Bolt is not only to improve performance, but also to improve the developer experience.

If you want to learn more about Bolt, read this excellent interview with Neo4j’s Nigel Small.


We are excited to announce that Neo4j 3.0 is available on all our plans. We’ve been working very hard these past few weeks to enable support for it on every tier, self-service and automated, as you’ve come to expect from our service. If you want to give it a go for free, follow this link and you’ll be trying out Bolt in a matter of seconds.
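
For readers who want to see what Bolt looks like from code, here is a minimal sketch using the official Python driver. The hostname and credentials are placeholders, and the `neo4j.v1` import reflects the 1.x driver that shipped alongside Neo4j 3.0; treat those details as assumptions rather than GrapheneDB-specific instructions.

```python
# Sketch: connecting to a Neo4j 3.0 instance over Bolt with the official
# Python driver. The hostname and credentials below are placeholders,
# not real GrapheneDB values.

def bolt_uri(host, port=7687):
    """Build a Bolt connection URI (7687 is the protocol's default port)."""
    return "bolt://{}:{}".format(host, port)

# With the driver installed (`pip install neo4j-driver`), usage looks
# roughly like this (API as of the 1.x driver; shown as comments so the
# sketch runs without a live server):
#
#   from neo4j.v1 import GraphDatabase, basic_auth
#   driver = GraphDatabase.driver(bolt_uri("hobby-example.dbs.graphenedb.com"),
#                                 auth=basic_auth("user", "password"))
#   with driver.session() as session:
#       result = session.run("MATCH (n) RETURN count(n) AS total")

print(bolt_uri("hobby-example.dbs.graphenedb.com"))
```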

GrapheneDB Now Offers Wider Support of AWS Regions

At GrapheneDB, we are committed to giving the best possible service through all of our deployment options.

We are happy to announce today support for 5 additional AWS regions on our Standard plans, and 3 additional regions on our Performance plans. Below, you can find our expanded list of supported AWS regions:

  • US East (N. Virginia)
  • US West (Oregon), new on the Standard tier
  • Europe (Ireland)
  • Asia Pacific (Sydney), new on the Standard tier

New AWS regions

If you want to change your existing database to one of these new regions, you can use our self-service clone feature and select the new AWS region of your choice when cloning.

Announcing Metrics General Availability

A select number of our Performance customers have been testing the beta of our new metrics dashboard for the last couple of months. We are very grateful for their valuable help in testing out this feature before officially rolling it out. We are excited to announce today that the new Metrics feature is out of beta and is now available for all of our customers on the Performance tier.

Our metrics dashboard will allow you to track server errors and see when errors are happening. You will also be able to track median and 95th percentile response times, as well as see incoming query and request throughput.
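
To make those numbers concrete, here is a small sketch of how a median and a 95th percentile can be computed from a window of response-time samples, using the simple nearest-rank method. This illustrates the statistics the dashboard reports, not GrapheneDB’s actual implementation.

```python
# Illustration of the statistics the dashboard reports: median (p50) and
# 95th-percentile (p95) response times, computed with the nearest-rank
# method. Sample data is made up; this is not GrapheneDB's implementation.
import math

def percentile(samples, p):
    """Nearest-rank percentile: p in (0, 100], samples in milliseconds."""
    ordered = sorted(samples)
    rank = math.ceil(p / 100.0 * len(ordered))
    return ordered[rank - 1]

response_times_ms = [12, 15, 11, 240, 14, 13, 16, 18, 12, 900]
print(percentile(response_times_ms, 50))  # median: typical request
print(percentile(response_times_ms, 95))  # tail latency: worst requests
```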

Metrics Dashboard

With our new Metrics dashboard, not only will you be able to see what is happening now, but you will be able to get historical information for the last few days by changing the time window of your reports.

No one operates more Neo4j database instances in the cloud than GrapheneDB. This gives our team great insight into how best to equip our customers for success. We understand that giving our customers real-time knowledge of how their servers are performing can help them diagnose key performance issues, keeping them on track for success.

We highly recommend you give our new Metrics dashboard a try and see how powerful it is for yourself. We have big plans for performance diagnostics in 2016, so stay tuned to this blog or our Twitter page for updates.

New: Introducing Flexible, On-demand Server Maintenance

GrapheneDB needs to perform maintenance on its servers from time to time. Maintenance procedures will vary depending on your database plan.

On Hobby databases, which are designed for development and testing, we might perform maintenance with downtime without prior notice.

On Standard databases, our entry-level production tier, we will usually send a notification a few days in advance with a specified time window during which we will perform maintenance and an estimated duration for the actual downtime.

Because of the multi-tenant architecture of the Standard tier, our schedule cannot accommodate preferences for maintenance windows.

Existing Performance and Enterprise customers might already know we offer 3 time windows to choose from for maintenance procedures. To accommodate customers who want to perform maintenance during specific periods of low traffic, we have also been setting up custom time windows on special request.

To make this process even easier and more flexible, we’ve added a new on-demand maintenance feature. It allows customers on the Performance tier to trigger scheduled maintenance procedures on their database with the click of a button, whenever it is most convenient for them. This gives customers the flexibility to schedule downtime when it is least disruptive to their business.

When a database instance has a maintenance operation scheduled, a warning with some details will be displayed in the UI:


You will also see a warning in the database overview with details about the maintenance, such as a description, the estimated downtime, and a deadline.

Scheduled maintenance

Customers on the Performance tier can trigger the maintenance procedure by clicking on the Launch maintenance button.

While the operation is in progress, a dialogue will inform you of the state of the procedure.

Maintenance in progress

When the operation is complete, the pop-up will be dismissed, and any warning messages will no longer be displayed. At this point, no other changes will be necessary, but you may have to wait a few minutes for the DNS to propagate.

Please note: Unless a maintenance procedure is initiated by a user before the deadline, the maintenance will run automatically at the time specified in the maintenance warning.

New Feature: Cloning GrapheneDB Instances Without Downtime

GrapheneDB has been offering a database cloning feature for a long time. This feature is useful when scaling vertically (i.e. cloning into a higher or lower plan) or when testing Neo4j compatibility before performing an upgrade.

Up until now, the clone feature shut down the database to export the entire dataset, then created a new database where the dataset was restored. We’re excited to announce a new feature that enables cloning a database without any downtime.

Because the new option clones from an existing backup rather than from the live store, it is especially useful when you cannot afford downtime, or when the dataset is static and the database is not taking any writes (in which case the latest backup matches the live data exactly).

From the list of backups of your database, click on the “Create DB from backup” option on the right-hand side. This will lead you to the “Clone an existing database” page, where you will be able to select further options and complete the cloning process.

Clone from backup

Alternatively, you can use the “Upgrade/Clone” button accessible from the database overview or the “Clone database” button from the databases index. This will take you to the “Clone existing database” page, where you can select the “Clone from backup” option.

Clone existing database

Please keep in mind that the new database will have a different connection endpoint, and the origin database will not be deleted. You will need to update your application’s connection settings to your new cloned database and ensure you have deleted the origin database to avoid being charged for a database you no longer need.


As always, we welcome any feedback you might have on this new feature via Twitter @graphenedb, or by contacting our support team.

Heroku Add-on With Shareable Multiple Installs

In the past, provisioning Neo4j instances with the GrapheneDB Heroku add-on has been limited to one database per application.

Our customers often provided feedback that they wanted to spin up additional database instances to perform tests, do database sharding, or to have different databases for different purposes connected to the same app.

To provision additional databases, customers relied on workarounds such as creating dummy apps and manually adding environment variables to share GrapheneDB connection URIs across apps. Needless to say, this was an error-prone hassle.

We have heard your feedback, and we are excited to announce that we’ve improved our Heroku add-on by adding support for multiple installs per app, and making them shareable across apps.

Heroku add-on

Below, you can find an example of how to provision a database by specifying the name of the database as well as the app.

$ heroku addons:create graphenedb:chalk --name my-other-graphenedb-db -a my-graphenedb-app
Creating my-other-graphenedb-db... done, (free)
Adding my-other-graphenedb-db to my-graphenedb-app... done
Setting GRAPHENEDB_MAUVE_URL and restarting my-graphenedb-app... done, v4
Your Neo4j database is being deployed. It can take some minutes before it's ready for use.
Use `heroku addons:docs graphenedb` to view documentation.

Please note that to avoid collision of environment variable names, the following logic is applied if a name is not specified:

  • If there is no GrapheneDB database for the specified app, the database will be named graphenedb and the environment variable will be GRAPHENEDB_URL.

  • If there are already other GrapheneDB databases and the graphenedb name is taken, a random color will be appended to the graphenedb name. An example of the resulting environment variable would be GRAPHENEDB_COBALT_URL
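
The naming rules above can be sketched roughly as follows. This is our reading of the behavior, not Heroku’s or GrapheneDB’s actual code, and the list of candidate colors is illustrative.

```python
# Sketch of the env-var naming rules described above (our reading of the
# behavior, not the actual add-on code). The color list is illustrative.
import random

def config_var_name(existing_vars, colors=("MAUVE", "COBALT")):
    """Pick the config var name for a newly provisioned GrapheneDB add-on."""
    if "GRAPHENEDB_URL" not in existing_vars:
        # First database on the app: plain name, no suffix.
        return "GRAPHENEDB_URL"
    # Name taken: append a random color to avoid a collision.
    free = [c for c in colors
            if "GRAPHENEDB_{}_URL".format(c) not in existing_vars]
    return "GRAPHENEDB_{}_URL".format(random.choice(free))

print(config_var_name([]))                  # first install
print(config_var_name(["GRAPHENEDB_URL"]))  # second install gets a color
```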

An existing GrapheneDB add-on database can be attached to an app using the attach command:

$ heroku addons:attach my-other-graphenedb-db -a other-graphenedb-app
Attaching my-other-graphenedb-db to other-graphenedb-app... done
Setting GRAPHENEDB_COBALT vars and restarting other-graphenedb-app... done, v6

If the GRAPHENEDB_URL environment variable is not taken, it will point to the most recently attached database. Otherwise, a random name will be assigned (GRAPHENEDB_COBALT_URL in the example above).

You can also specify the desired name when attaching an add-on with the --as parameter:

$ heroku addons:attach my-other-graphenedb-db --app other-graphenedb-app --as GRAPHENEDB_FOO
Attaching my-other-graphenedb-db as GRAPHENEDB_FOO to other-graphenedb-app... done
Setting GRAPHENEDB_FOO vars and restarting other-graphenedb-app... done, v8

We look forward to hearing how you use the new and improved Heroku add-on! Feel free to tweet us about your experiences at @graphenedb on Twitter, or email us if you have any questions.

3 Major Graph Database Technology Trends to Watch Out for in 2016

GraphConnect 2015

In a recent article on Forbes, Neo Technology CEO Emil Eifrem shared his predictions for the graph database space in 2016. If you haven’t read them already, the article is definitely worth a read.

We were inspired by Emil’s predictions and gathered our own thoughts on the trends we expect to rise in 2016.

Trend #1: The Rise of Data-Driven Decisions

Startups are already known for being extremely data-driven in their decision-making process, so this may not be something new. However, we are seeing more and more that disciplines that were not data-driven in the past are adopting this way of thinking.

Take Digital Marketing, for example. Even looking back two years ago, the success of a digital marketing campaign was still hard to measure. Looking back further than that, things become even more cloudy. Do you measure the success of a digital campaign through just Facebook Likes or Retweets on Twitter? That seems simplistic and not truly reflective of user activity (i.e. how many retweets result in sales?). Graph technology can help marketers make sense of data relationships and connections throughout the course of a campaign in order to get a more holistic picture of a user’s journey.

While there was a reluctance in the past from some practitioners in the digital marketing space to adopt data-driven decision-making (due to fear that it would expose their marketing efforts as being ineffective), leveraging data to make decisions as it relates to marketing campaigns has become the only way to stay competitive. If you’re making decisions based on assumptions, you’re not going to be very successful.

It’s not just marketing that is adopting data-driven decision-making at a rapid pace. Everyone is simply becoming more data-driven. Companies can use data to shape upcoming products, see how well a feature is working, or even streamline internal processes.

Trend #2: Data becoming increasingly interconnected

This one again is nothing new. Data has always been interconnected, but we have always been storing it in a “flat” way. We have been oversimplifying data models because the technology to look at data in its natural “interconnected” way was simply not there. Thanks to graph database technologies, we are now discovering new opportunities and new ways to look at how data is interrelated.

User behavior is very complex, and looking at things in isolation, as we have typically been doing, is really an oversimplification. We have been segmenting data for far too long, simply because we didn’t have the technology to do otherwise. Graph databases came along because data has always been interconnected.

For example, take a user’s purchasing decision. If you only look at when a user made a purchase and what they purchased, you’re missing a lot of the important factors that led to the purchase. That’s the data that will help you make better decisions about how to engage your users! More and more companies, such as Adidas and Walmart, are starting to adopt graphs because they are a superior option in understanding how users make purchasing decisions. This enables companies to target actions and campaigns that work. Being smarter about the user is where the market is headed, and graph technology helps with that.

Perhaps you had suspicions that you could make sense of data in this highly interconnected way, but you never really had the tools. Now, with graph technology, we’re seeing a new way of thinking about data. It’s a paradigm shift and a whole new world of opportunities!

Trend #3: Polyglot persistence

Companies are now managing increasing complexity in their systems. For some time, there was a trend to implement systems in a single technology stack; maybe you did everything in Java because it was company policy. But look at highly complex apps like Uber or Airbnb: you cannot run such a complex operation with just one tech stack. You have to combine different technologies. There are now many different tools for any problem you need to solve, and everything is distributed, so companies are developing in an increasingly polyglot way.

Polyglot persistence means storing data in different databases, depending on what you need. You may have a Mongo, Redis, and Neo4j database for different requirements, as they all excel at different things. This set-up is becoming increasingly normal. You can no longer just pick one database or stack and stick with it; you need to pick the best tool for the job.

For example, if you wanted to build a video streaming and recommendations service, you could store videos in one central database, but run the recommendations engine on a separate database, such as a graph database, that is better suited to making sense of connected data.
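
A toy sketch of that split, with both “databases” replaced by in-memory stand-ins: a document-style store holds video metadata, while a separate co-watch structure answers the graph-shaped question of what similar users watched. All names and data are made up for illustration.

```python
# Toy illustration of polyglot persistence: a document-style store holds
# video metadata, while a separate graph-like structure answers "users who
# watched X also watched Y". Both stores are plain in-memory stand-ins,
# not real Mongo/Neo4j clients, and all data is invented.
from collections import defaultdict

videos = {  # document store: id -> metadata
    "v1": {"title": "Intro to Graphs"},
    "v2": {"title": "Cypher Basics"},
    "v3": {"title": "Scaling Neo4j"},
}

watched = defaultdict(set)  # graph store: user -> set of watched video ids
watched["alice"] = {"v1", "v2"}
watched["bob"] = {"v1", "v3"}

def recommend(user):
    """Recommend videos co-watched by users who share a video with `user`."""
    mine = watched[user]
    recs = set()
    for other, theirs in watched.items():
        if other != user and mine & theirs:
            recs |= theirs - mine
    return sorted(videos[v]["title"] for v in recs)

print(recommend("alice"))  # bob shares v1, so alice gets his other video
```

Each store stays simple and specialized; syncing the two (for instance via a message log) is the operational cost this trend pays for.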

This polyglot way of developing systems does not require you to know every system or stack out there, but it does require deep collaboration between team members with different expertise in order to create complex applications. While polyglot development is not new, it is fairly new at the database level.

A couple of big trends we see in this space are the use of Apache Kafka to keep databases in sync, and the arrival of more and more mature tools that connect popular databases to each other, lowering the bar for polyglot persistence.

Image credit: NeoTechnology.

Interview With Jean Villedieu of Linkurious

What is your name and what do you do?

My name is Jean Villedieu (@jvilledieu) and I am a co-founder of Linkurious, where I am in charge of sales and marketing.

Linkurious provides companies with data insights through graph visualizations powered by Neo4j, making it easy for end users, either data scientists or business analysts, to understand graph data.

We are a 5 person team based in Paris, but we have customers all over the world — mostly in the US, but also in Europe, China, Australia and South America. Some of our customers include companies that use our technology for fraud detection and medical research. One of our most notable clients is NASA.

What did you do before joining the Neo4j community?

I met Linkurious co-founder Sebastian three years ago. Sebastian had created Gephi, a very successful open-source graph visualization platform. At that time, he already had the idea for Linkurious; I thought it was a cool idea, so I decided to join him in starting the company.

Did you find it risky to start a new company?

I found it exciting! I understood very quickly that there was an immense possibility for what we could do with the company. The world is already structured as networks, whether it’s social links, transactions, or the way ideas spread. These are networks. It’s a new way to present and think about information, which can empower you to make smarter decisions. I just saw a huge potential for this technology.

Working in the data visualization community in Neo4j, do you see any trends we should be aware of?

As companies store more and more data, and that data gets increasingly connected and sophisticated, graph technologies will be key in making sense of the data. Smart big data solutions will continue to have a high impact in the industry.

What is your favorite community project?

Linkurious.js is an open source project we support, which is free to use. Anyone can download it from GitHub. It’s even used for commercial projects. I’m always excited to hear about how people use it.

Just the other day, someone reached out to us. They are developing an application on GrapheneDB with Linkurious.js and they were psyched about it. That’s the beauty of having an open source project — anyone can use it and start creating something meaningful very quickly.

What is your favorite Neo4j use case you’ve seen?

NASA uses Linkurious to explore and manage data. They have a database of lessons learned, which they explore visually, making it easy to understand what went wrong and what went well, and to avoid repeating mistakes. So, sending stuff to space is really cool!

The International Consortium of Investigative Journalists (ICIJ) used Linkurious to analyze data from HSBC Bank, and a wide range of fascinating stories came out of their research. They were able to make connections and see how some shady, corrupt businesses operate, which sparked a debate on offshore banking. There was a segment on 60 Minutes about it and articles in The Guardian and Le Monde. You can read more about this here; it’s a fascinating use case of making sense of data with our product.

Any parting words or tips you’d like to share?

Well, Linkurious is compatible with GrapheneDB! So if you want to try out our service and need an instance of Neo4j, GrapheneDB is definitely an option some of our customers use. Or you can also use Linkurious.js and GrapheneDB as mentioned earlier.

Interested in Linkurious? Sign up for an online demo.

Meet Cycli: The Best CLI Client for Neo4j

cycli - Query and update your Neo4j database from the command line

GrapheneDB operates the largest fleet of Neo4j databases in the cloud. As a result, we talk to a wide variety of customers every day, all with very different needs. One of the most common questions we receive across the board is how to query Neo4j from the command line.

You may already know that Neo4j ships with a CLI tool called neo4j-shell. While neo4j-shell might work fine locally, it can’t be used to connect to public-facing Neo4j instances that have been secured.

Luckily, there is a great tool called cycli that allows you to connect securely to remote servers using the Neo4j REST endpoint.

cycli output

Besides being able to connect to remote servers securely using authentication credentials and SSL, we’re big fans of cycli due to the following killer features:

  • Syntax highlighting with colors that emulate the Neo4j browser, making it easy for Neo4j users to read queries and catch errors.
  • Smart auto-completion that not only suggests Cypher keywords, but also node labels, relationship types and properties based on your current dataset.

When customers ask us for recommendations on how to best query Neo4j on the command line, we always recommend cycli — it only made sense that we incorporate cycli into our product somehow to make things easier for our customers.

We’re excited to announce that we have now included a direct snippet for cycli in the GrapheneDB Connection UI, so you can easily leverage the power of cycli with GrapheneDB.


More about cycli

cycli is a CLI tool built by Nicole White, a data scientist at Neo Technology, who is also the maintainer of the R driver for Neo4j. cycli is implemented in Python and uses Nigel Small’s Py2neo to connect to Neo4j.

cycli can be installed using the pip package manager:

$ pip install cycli

View cycli on GitHub.

If you’re interested in knowing more, Nicole published a great blog post, explaining how she implemented the smart-autocompletion feature using Markov chains. You can read it here. Nicole also recently made an update to cycli, you can read more about it here.
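
To give a flavor of that approach, here is a miniature first-order Markov chain over query tokens that suggests likely next keywords. This is our own illustration of the idea, not cycli’s actual implementation, and the training corpus is made up.

```python
# Miniature version of the idea behind cycli's smart autocompletion:
# a first-order Markov chain over query tokens suggests likely next
# keywords. Our own illustration, not cycli's actual implementation.
from collections import Counter, defaultdict

corpus = [  # invented training queries; cycli learns from real history
    "MATCH ( n ) RETURN n",
    "MATCH ( n ) RETURN count ( n )",
    "MATCH ( n ) WHERE n.name = 'x' RETURN n",
]

# Count how often each token follows each other token.
transitions = defaultdict(Counter)
for query in corpus:
    tokens = query.split()
    for current, nxt in zip(tokens, tokens[1:]):
        transitions[current][nxt] += 1

def suggest(token, k=2):
    """Most likely next tokens after `token`, best first."""
    return [t for t, _ in transitions[token].most_common(k)]

print(suggest(")"))  # after a closing paren, RETURN is the most common
```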

Find Us at GraphConnect 2015

GraphConnect 2015 is this week and we couldn’t be more excited to attend!

We are sponsoring the event and will be available at our booth all day to talk to anyone interested in GrapheneDB. If you’re an existing client, we’d love to touch base and see how you’re enjoying our service. If you’re considering using GrapheneDB, we’d love to talk to you as well to see how we can help you build something great with Neo4j!

What to expect at the GrapheneDB booth

There will be lots of goodies at the GrapheneDB booth. There will be plenty of swag to bring back home, plus we’re giving free credits towards a standard or production plan to those who come visit us at the booth.

Preview new features

We’ll be launching a new metrics dashboard feature soon, but if you’d like to get a sneak peek, please come find us. We’re looking for new or existing customers who may be interested in participating in the beta for this feature. Come get a demo of our new metrics dashboard and sign up for the beta.

Talks we’re looking forward to

In addition to sponsoring, we’re also looking forward to the following talks, so this is where you’ll find us during the conference.

We are, of course, looking forward to Emil Eifrem’s keynote at 9:00am. We can’t wait to hear what news he has to share with the community. In addition, we’re especially interested in:

  • “Real-Time Recommendations with Graphs and the Future of Search” by Michal Bachman, at 2:40pm.

  • “Advanced Neo4j at FiftyThree” by Aseem Kishore at 4:20pm.

  • “Polyglot Persistence for Microservices using Spring Cloud and Neo4j” by Kenny Bastani and Josh Long at 5:05pm.

Now you know where to find us and what to look forward to. Be sure to follow us on Twitter (if you aren’t already) for conference updates. We hope to see you there!