The wyDay blog is where you find all the latest news and tips about our existing products and new products to come.


wyDay blog

Archive for the ‘Current Affairs’ Category

Roughly a year ago we made a change to how the LimeLM web API functioned. We changed the behavior from allowing API calls to originate from any device anywhere in the world to allowing only one device per API key. We did this for security reasons, which I’ll explain in depth in a moment. But the side effect was that this made our web API harder to use.

And we didn’t make this change blindly. We knew it would increase our customers’ data security, but at the expense of their ability to easily use our API.

Naturally, this caused outrage among some of our customers, both because we made this change at all and because we made it rapidly (over the course of about two weeks, with short notice).

What our API does, and why we changed enforcement

LimeLM is our software licensing product. Companies integrate our components and libraries into their software so they can license and sell their software to their end-customers. After a company integrates our components into their software, the typical workflow looks something like this:

  1. An end-customer buys the software from the company.

  2. The company sends them a product key.

  3. The end-customer activates the software on that device using that product key (locking it to the device).

None of those steps requires using the LimeLM web API, but you could use the web API in step 2 to automate the product key generation.
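For example, a server-side script could generate a key automatically at purchase time. The sketch below is illustrative only: the method and parameter names are assumptions based on the general shape of the LimeLM REST API, so check the actual API reference before relying on them. The one part that is not an assumption: the API key is a secret, so a call like this belongs on your server, never inside your app.

```python
import urllib.parse
import urllib.request

API_URL = "https://wyday.com/limelm/api/rest/"

def build_pkey_url(api_key: str, version_id: str, num_keys: int = 1) -> str:
    """Build the request URL. Method/parameter names here are illustrative."""
    params = urllib.parse.urlencode({
        "method": "limelm.pkey.generate",  # assumed method name
        "api_key": api_key,                # secret -- keep server-side only
        "version_id": version_id,
        "num_keys": num_keys,
    })
    return f"{API_URL}?{params}"

def generate_pkeys(api_key: str, version_id: str, num_keys: int = 1) -> bytes:
    """Fetch the response containing the newly generated product key(s)."""
    with urllib.request.urlopen(build_pkey_url(api_key, version_id, num_keys)) as resp:
        return resp.read()
```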

Since the launch of LimeLM about a decade ago we’ve always told customers to treat the LimeLM API key as a password and never embed the key in any apps or client-side JavaScript. We started with a big warning directly under the API key on the settings page (which is still there), and more recently we’ve added a full article devoted to account security.

Needless to say, many customers outright ignored this advice. Some were just careless with their API keys and included them in public repositories and scripts accessible from outside their companies. Others deliberately ignored our advice and embedded their web API key directly inside the apps they gave to their customers, thereby giving everyone with access to the app binary access to their LimeLM account.

Enforcing the rule: limiting API keys to a single IP address

The proverbial straw that broke the camel’s back for this whole situation was when we saw unusually high web API usage from a particular web API key. It turns out this customer (a large software company) had embedded their web API key directly in their app. We investigated the abnormal API key usage and saw that the key was being used from all over the world. It didn’t look like this customer’s data was being leaked, but there was so much noise in the data that we could not tell.

We immediately told them what we saw, re-iterated that a web API key should never be embedded inside code that runs on the end-user’s computer, and we invalidated that API key.

Then we kept digging. And over a short period of time we saw some of our other customers making the same mistake. We notified them as well, blocked those API keys, and quickly implemented a one-IP-address-per-API-key rule. And that’s where we’ve been for the past year or so.

A slightly more flexible future

In the near future (after we get out some higher-priority releases) we’re going to make web API key usage slightly more flexible. Namely, we’ll allow an API key to be used from 3 to 4 IP addresses within a 72-hour period. So if you have a small pool of servers with static IP addresses, this upcoming change will make things easier for you.
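As a back-of-the-envelope illustration (not our actual implementation; the names, in-memory storage, and exact cutoff below are assumptions), the planned rule amounts to tracking the distinct IP addresses seen per key within a sliding 72-hour window:

```python
import time
from collections import defaultdict

WINDOW_SECONDS = 72 * 3600   # 72-hour sliding window
MAX_DISTINCT_IPS = 4         # allow a small handful of IPs per key

# api_key -> {ip: last_seen_timestamp}; an in-memory stand-in for a real store
_seen = defaultdict(dict)

def allow_request(api_key: str, ip: str, now=None) -> bool:
    """Allow the call if this key has used at most MAX_DISTINCT_IPS
    distinct addresses within the sliding window."""
    now = time.time() if now is None else now
    ips = _seen[api_key]
    # forget addresses not seen within the window
    for old_ip in [i for i, t in ips.items() if now - t > WINDOW_SECONDS]:
        del ips[old_ip]
    if ip in ips or len(ips) < MAX_DISTINCT_IPS:
        ips[ip] = now
        return True
    return False
```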

In the meantime, these customers can create a new user (and thus a new web API key) for every separate static IP address they need to use.

We will never allow more than a handful of IP addresses to use any single web API key. Some have requested we enable a range of IPs that includes whole services (e.g. AWS). We will never do that. All modern services offer static IP addresses for all of their services. Even so-called “serverless” (🙄) servers (like AWS Lambda or Azure Functions) have options to use static IP addresses. Google the particulars for your web host of choice.

In the coming months we’ll make a few separate product announcements that will eliminate the need for most common web API calls in the first place. Thus eliminating these problems (and the need for separate servers) altogether. These solutions will be rolled out gradually.

Why do we care?

Why do we bother implementing any limit at all? It pisses off some of our customers and scares away others. The reason we do it is simple: we’re responsible for your customers’ data. Yes, these people are not our customers, but their data is on our servers and we need to ensure that it is only transmitted to trusted endpoints. By limiting API calls to a select number of IP addresses, we force you, as our customer, to consider your customers’ data safety and to properly implement data collection and transmittal.

Or, to put it another way: we actually take security seriously. So rather than implementing terrible security and writing a tepid PR apology when data leaks, we’re proactive and go out of our way to ensure data doesn’t leak. It’s hard work and it sometimes requires usability trade-offs, but it’s what a serious software company should do. Namely, take data security seriously in the first place.

It’s not a perfect solution, but it’s a better solution than telling our customers to do things correctly (and hope & pray they do). How do we know? Because we tried that and a large number of our customers ignored us.

If you have questions, comments, or complaints about this policy shift, then feel free to comment down below.

– Wyatt O’Day
Founder & CEO of wyDay

wyDay is based in the United States and has customers from all over the globe, including in the European Union. This means that today is a big day. Why? The General Data Protection Regulation (GDPR) is now in full effect. I won’t bury the lede: we’re fully compliant with the GDPR.

For those who don’t know, or whose eyes glaze over at the sight of the word “regulation”: briefly, the GDPR is a new law that attempts to standardize the scattershot regulations that various E.U. member states had thrown together over the years. Ultimately, the goal of the GDPR is to protect users’ data (their names, information about them, including seemingly unimportant metadata). And the way it does that is by giving the regulation “teeth” in the form of big fines for any company that doesn’t adequately protect their customers’ data.

This is great news for users (everyone in the world). This means that the next company that experiences a major data breach will pay dearly for its under-staffing and lax security.

The only people grumbling about these new privacy regulations are companies that don’t want to invest time and money into securing their users’ data and companies that make money off of selling your data to third parties. But make no mistake, the GDPR is great news for you as a person living in this modern world. Now we just need other countries to take similar steps.

The good news is that we’ve taken privacy and security seriously from day one. So the “costs” of GDPR aren’t terrible. We’ve had to do a few more bureaucratic and legal things that we wouldn’t have had to do otherwise (new Data Processing Agreement and an updated Privacy Policy). But as a pure engineering problem, the GDPR represents a set of best practices that we were already doing, plus a few extras.

The few extras — things we had planned on doing, but got pushed up due to GDPR coming into effect today — are better user-protections:

  1. Secure 2-factor authentication (i.e. 2fa not using SMS)

  2. Going the extra mile to ensure our customers are using secure passwords.

Two factor authentication (or 2fa)

Most people who end up on this blog are tech-savvy and already know what 2fa is. But, briefly, it’s a second code that you have to enter after you’ve already logged in with your username and password. This second code has, in the past, come via an SMS message to your cell phone. However, recent reporting (and even guidance from NIST) has shown that sending a “security code,” or any form of 2fa, over an unsecured mobile network is a bad idea.

If SMS is bad, then what do you use for two-factor authentication? Enter the Time-based One-time Password algorithm (a.k.a. that Authenticator program on your phone that spits out 6-digit codes).
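For the curious, the algorithm itself is tiny. Here’s a minimal RFC 6238 sketch in Python (real authenticator apps also handle base32-encoded secrets, clock drift, and so on):

```python
import hashlib
import hmac
import struct
import time

def totp(secret: bytes, timestep: int = 30, digits: int = 6, t=None) -> str:
    """RFC 6238 TOTP: HMAC-SHA1 over the number of 30-second
    intervals elapsed since the Unix epoch."""
    counter = int((time.time() if t is None else t) // timestep)
    msg = struct.pack(">Q", counter)
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Both your server and the phone app compute this from a shared secret; the code changes every 30 seconds, so intercepting one is nearly useless.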

We’ve very recently rolled out 2fa in LimeLM. And, in the coming weeks, we’ll add extra security by letting you force your employees to use 2fa if they want to continue to log in to your LimeLM account. Read all about it in our “Account security” LimeLM help article.

Good passwords and verifying that you are who you say you are

The next item in the list of making you more secure is actually verifying that your passwords are good. This is not an easy problem to solve. For years now you might’ve seen “password security” gauges / meters shown when you enter your password into a new web app. Aaron Toponce on Twitter recently posted an example used on a real website.

These gauges are worse than useless: they don’t actually tell you if your password is secure, and they might tell you that an actually-secure password is bad. The most favorable description I can give these “password meters” is that they’re nice-looking pseudoscience. Unfortunately, it’s not just toy web apps that fall for that pseudoscience. Security-“aware” companies are prone to garbage science (those images are taken from Symantec’s Norton product, which was still using those bad password indicators at the time of this post).

I’m not alone in the conviction that these password meters are useless (see: Password Strength Indicators Help People Make Ill-Informed Choices or Why you can’t trust password strength meters).

Minimum password length: 8 characters long

Previously we didn’t care how long your password was; we just assumed customers would make good decisions and not use stupidly short passwords. Now we enforce a minimum of 8 characters for passwords (as recommended by NIST).

Verifying your password has not been compromised in 3rd party data leaks

So if “password meters” are garbage, how do we ensure you use a good password and, similarly, how do we actually verify that you are who you say you are? Forcing longer passwords solves part of the problem, as does enabling two-factor authentication. But we’re also going a step further and checking whether your password has been compromised in a data leak from another company. We’re doing this using the fantastic data from “have I been pwned?”. This lets us verify that the password you’re using hasn’t already been compromised (or is so weak that a billion other people use it as their password).
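The clever part of the “have I been pwned?” Pwned Passwords API is k-anonymity: you hash the password with SHA-1 and send only the first 5 hex characters of the hash, so the password (and even its full hash) never leaves your server. A minimal sketch of the check, with error handling omitted:

```python
import hashlib
import urllib.request

def match_count(range_body: str, suffix: str) -> int:
    """Parse the 'SUFFIX:COUNT' lines returned by the range endpoint."""
    for line in range_body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

def password_breach_count(password: str) -> int:
    """How many times this password appears in known breaches (0 = not found).
    Only the first 5 hex chars of the SHA-1 hash are sent over the wire."""
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        return match_count(resp.read().decode("utf-8"), suffix)
```

A nonzero count means the password should be rejected outright, no “meter” required.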

All of these things together (plus the intrusion detection built into our back-end) ensure your data remains safe and secure.

– Wyatt O’Day
Founder & CEO of wyDay