Securing our server with access control and environment segregation
Server optimization and process correction as a system administrator

Imagine you are part of a fast-growing tech start-up. You work hard day and night to build great things, and every day you deal with gigabytes of data stored on your server. What would happen if that server shut down one day? Even imagining it is a nightmare.

Servers are the cogs in the wheels of the IT industry. Keeping them sane is a crucial need of the hour, and this is where the system admin's role comes in. We at ColoredCow were lucky to foresee the severity of this issue from its early warning signs, and our system admins came up with a solution: segregate the environments on the server.

Somewhere between our goal to make things happen and its execution, we realized that we had taken some shortcuts. These shortcuts, though beneficial at the time because they helped us ship faster, would no longer contribute to our sustainability. In fact, left unaddressed, they would definitely have had a negative impact and, in the worst case, caused the complete shutdown of what we've built. That sounds grim, but it was the reality. We needed to correct the situation and do things the way they should be done.


We have been using AWS Elastic Compute Cloud (EC2) to host all our projects. We realized that the following loopholes in our processes could take us down:

  1. Hosting multiple test projects in a single EC2 instance. These were primarily used for User Acceptance Testing. The sad part was that coloredcow.com was hosted on the same box: if something went wrong in a test project, it would affect our website too.
  2. CPU bursts and server crashes leading to downtime. The server couldn't handle the combined traffic of all our projects. Apart from visitors to coloredcow.com, the team also contributed to the crashes inadvertently: while testing all our projects simultaneously, we were the main source of traffic to those test sites.
  3. User management was missing on Production, and everyone with access logged in as ec2-user, which by default has full root privileges. Anyone could have deleted files or databases, or broken our server configuration, and worst of all, no one would have known who did it (to the server we were all the same: ec2-user). The sketch after this list shows one way to replace a shared login with individual accounts.
  4. Apart from the root-access problem, someone updating their own project for testing could have unknowingly changed something in another project they weren't part of. It never happened, but it was a plausible scenario, so limiting users to their own projects was also needed.
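Fixing the shared-login problem (point 3) starts with giving every team member an individual account. Here is a minimal sketch of what that looks like on an Amazon Linux box; the username, key path, and commands shown are illustrative, not the exact setup from this case study:

```bash
# A minimal sketch, assuming an Amazon Linux (RHEL-family) box;
# the username and key path are hypothetical.

# Create a personal account instead of sharing ec2-user.
sudo adduser asha

# Install the member's public SSH key so they log in as themselves.
sudo mkdir -p /home/asha/.ssh
sudo cp /tmp/asha_key.pub /home/asha/.ssh/authorized_keys
sudo chown -R asha:asha /home/asha/.ssh
sudo chmod 700 /home/asha/.ssh
sudo chmod 600 /home/asha/.ssh/authorized_keys

# Grant sudo only to members who administer the box; with named
# accounts, privileged actions show up in the logs under a real user.
sudo usermod -aG wheel asha   # 'wheel' is the admin group on RHEL-family systems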


Apart from the CPU bursts, all the other problems were foreseen before they did real damage. They could have become a huge barrier to our growth, since CodeTrek was taking shape and we were starting to get traction and value out of it. We decided to set up a new server to host our projects for testing purposes and keep the Production box only for coloredcow.com. This reduced the load on our primary server. We migrated all our test projects to the new box and removed them from Production. We then implemented a user management module on both boxes, which helped us manage users and restrict them from accessing or modifying server configurations and other people's projects, as sketched below.
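For the project-isolation problem (point 4), standard Unix groups and file permissions are enough. Below is a minimal sketch of that approach; the group names, usernames, and paths are hypothetical:

```bash
# A minimal sketch of per-project isolation with Unix groups and
# permissions; group names, usernames, and paths are hypothetical.

# One group per project.
sudo groupadd projecta
sudo groupadd projectb

# Add developers only to the groups of the projects they work on.
sudo usermod -aG projecta asha
sudo usermod -aG projectb ravi

# Each project tree is owned by its group: members get read-write,
# everyone else gets no access at all.
sudo chgrp -R projecta /var/www/projecta
sudo chmod -R 770 /var/www/projecta

# setgid on directories so files created later inherit the project group.
sudo find /var/www/projecta -type d -exec chmod g+s {} +
```

A user who isn't in a project's group can't even read that project's files, let alone modify them, which rules out the accidental cross-project changes described above.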

CHALLENGE

The absence of user management, leading to uncontrolled server access.

OUTCOME

User authentication and authorization gateways on our servers. An administration module that grants users read-write privileges on their own projects.

TECHNOLOGY

Amazon AWS
Linux system administration
Users and groups
File permissions

Tech infrastructure is one of the pillars of a company. One should take special care to set this up right.

Vaibhav Rathore