Curious about your DevOps

CA running on donations? :scream_cat: I’d rather not use them :slight_smile:

Lol… Yes, it is free like in Open Source…

They are not going broke anytime soon… take a look at who is behind Let’s Encrypt…

Who? I do not see anyone with endless pockets. Am I looking in the wrong place?

I see corporate sponsors, but their total contribution is a few million dollars a year (assuming they do not pay above the required membership level). That doesn’t feel like enough to run a large CA.

And they already ask for donations. What if their donation goals are not met? What services will be scaled down or shut down?

Then you buy one and install it… lol I don’t see it as a big deal and besides, I like the idea…

If Wikipedia can survive on donations when they are full of shit, why not this?

Agreed. If the cert lasts 90 days and something happens where the CA dries up, you have a window to purchase and install a new one without interruption.
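To make that window concrete, here is a minimal sketch of the renewal math: the 90-day lifetime comes from the thread, while the cert date and the 30-day renewal policy are assumptions for illustration.

```python
from datetime import datetime, timedelta, timezone

def days_until_expiry(not_after, now):
    """Days remaining before a certificate's notAfter timestamp."""
    return (not_after - now).days

# Hypothetical cert issued "today" with the 90-day Let's Encrypt lifetime
now = datetime(2024, 1, 1, tzinfo=timezone.utc)
not_after = now + timedelta(days=90)

remaining = days_until_expiry(not_after, now)
# Assumed policy: start renewing (or shopping for a paid cert) under 30 days
print(remaining, remaining < 30)
```

A cron job running a check like this daily would give you weeks of warning, plenty of time to buy and install a replacement cert.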

1 Like

So I’m a big fan of AWS EC2 instances.

Right now you can get a nano instance for ~$4.25/month and it’s trivial to scale up when you need to.

I too use Linode for certain things, like dev/test environments, but AWS for production has always done me well.

1 Like

I was always scared of the unlimited potential cost of anything I do in AWS… If they had a capped offering, like $20/month, I would be fine, but you pay for every byte… scary…

Well @Serge, I guess… :confused:

But I mean, you control your outbound traffic; are you really not going to notice that you’re fetching over 1TB of data for $5/month?

  • Outbound data transfer in excess of your plan’s data transfer quota is subject to overage charges. Please see FAQ for more detail
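As a rough sketch of how an overage bill would play out: the 1 TB quota on a $5 plan is from the post above, but the $0.09/GB overage rate here is an assumption for illustration, not a quoted price.

```python
QUOTA_GB = 1024          # 1 TB included with the $5 plan (from the thread)
OVERAGE_PER_GB = 0.09    # assumed rate, check the provider's FAQ

def overage_cost(used_gb):
    """Charge for outbound transfer beyond the plan's quota."""
    excess = max(0, used_gb - QUOTA_GB)
    return round(excess * OVERAGE_PER_GB, 2)

print(overage_cost(900))    # within quota, no extra charge
print(overage_cost(1524))   # 500 GB over the quota
```

The point being: at any plausible per-GB rate you would have to blow far past the quota before the bill gets scary, and your own traffic graphs would show it coming.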

Oh, the Lightsail servers… lol I was thinking of EC2 instances… now, those I just discovered… thanks… I am so entrenched in my DigitalOcean/Linode setup that I did not realize they had such a service… I will check it out for sure… they have a free month offer so I will spin up a server to play with it…

1 Like

Your mileage may vary, but watch out for t2 instance types. They work on a system of CPU credits and usually don’t make sense for an active database (you can and will run out of credits quickly) or other periodic CPU-intensive tasks that eat credits away faster than they are replenished.
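To show why a busy database drains a t2, here is a toy simulation of the credit mechanism; the specific numbers (roughly 6 credits earned per hour and a 144-credit cap, as for a t2.micro, where one credit is one vCPU-minute at 100%) are assumptions for illustration.

```python
# Assumed t2.micro-like parameters; check AWS docs for your instance type
EARN_PER_HOUR = 6.0      # credits accrued per hour
MAX_BALANCE = 144.0      # balance cap

def simulate(hours, cpu_util, start_balance=MAX_BALANCE):
    """Credit balance after running at `cpu_util` (0..1) for `hours` hours."""
    balance = start_balance
    for _ in range(hours):
        burned = cpu_util * 60.0  # one credit = one vCPU-minute at 100%
        balance = min(MAX_BALANCE, balance + EARN_PER_HOUR - burned)
        balance = max(0.0, balance)
    return balance

# A database averaging 50% CPU burns 30 credits/hour but earns only 6,
# so even a full balance is gone in a working day:
print(simulate(hours=6, cpu_util=0.5))
```

Once the balance hits zero the instance is throttled to its baseline, which is exactly the “run out of credits quickly” failure mode described above.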

We use(d) a lot of t2.micro instances. Aside from having to keep an eye on CPU usage, we found that nano/micro instances have issues more often. These either clear up on their own in ~30 minutes, or you need to spin up new instances to replace them. t2.medium instances have yet to have such an issue (knock on wood). I think they cram a lot of micro/nano instances onto each physical host, so you may get more noisy-neighbor issues on top of the instances failing more often.

(This is in the us-east-1 region; you may experience otherwise in other regions.)

Just our experience - That all may be totally OK for you if you’re using multiple instances behind an LB and have automation in place to replace a server quickly.


Yea, you really don’t want to install your own DB on here; you want to use their RDS instances for anything DB-related. They are pre-installed, fully managed database instances. Makes life easy :slight_smile:

If you did want to install your own DB (and why would you for a “getting started” bootstrapped product?) then you need to pick an instance type that works best for that specific use-case.

t2’s (and AWS in general) are great, cheap little virtual servers that can serve up your bootstrapped product’s web traffic without issue.

AWS makes it trivial and cost effective to scale up when you need to, but stay cost sensitive when starting out.

At least, that’s been my experience…

You can ignore the whole CPU-credits thing (and the like) under this use case; t2’s provide a baseline that is sufficient for almost all needs when you’re bootstrapping a product.

I’ve found that you need to pay attention to that stuff when you are “pushing the envelope.” A problem we all hopefully have at some point :slight_smile:

My experience with them has been scaling up to over 100k visitors/day on AWS instances without issue.
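Some back-of-envelope math on why 100k visitors/day is comfortable for small instances: the daily figure is from the post, while the 10x peak factor is an assumption for illustration.

```python
# 100k visitors/day sounds big but is modest in requests-per-second terms
visitors_per_day = 100_000
avg_rps = visitors_per_day / 86_400   # seconds in a day
peak_rps = avg_rps * 10               # assume traffic peaks ~10x the average
print(round(avg_rps, 2), round(peak_rps, 1))
```

Even at an assumed 10x peak that is only around a dozen requests per second, well within reach of a small web server, which is why the CPU-credit baseline rarely matters at bootstrap scale.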

OK then, why bother while bootstrapping… None of my DigitalOcean instances have had any trouble with anything I threw at them, including databases…

There are some large sites that run on those DO and Linode instances… without all of this massaging… and as soon as you start using those AWS services, it’s easy to get locked in…

I dunno… Maybe it’s because I am ok with doing my own devops…

Yea good question.

For me it’s always been that DigitalOcean/Heroku/etc… stop being cost & time effective for me once I get to a certain scale (traffic/usage volumes).

So now I’m locked into their infrastructures and I need to un-bolt all of it and migrate over to something that can handle it. For me that’s always been AWS.

That migration has always felt like such a painful waste of time (I typically do my own DevOps as well). I’d rather be driving more visitors into my product, or out sailing in the Bay :slight_smile:

So my solution now has just been to start somewhere that can handle the load I hope to get with my product(s).

This approach may not work for anyone else, but it’s been working for me…

1 Like

We often use a very cheap option: a barebones Linux box with 8GB RAM and an SSD for $20 per month. So far they have been reliable.

We are running a dedicated DEV server which handles build/provisioning/deployment. Stack: a typical combination of git/Jenkins/Nexus/custom shell scripts.

When clients ask we deploy to AWS, usually some sort of t2.

Recently started to use Terraform for AWS provisioning.

1 Like

I’ve been reading about this a bit, but haven’t had a chance to use it. How’s it been working for you with AWS? Curious how this would compare to using Docker on AWS (and locally for a dev environment).

It’s great, especially for AWS.

Don’t get confused though; its job is to set up infrastructure in AWS, not to decide how you run your apps. Comparing Terraform to Docker is apples and oranges. (Apologies if I’m misinterpreting your question.)

A typical workflow might be:

  1. Terraform builds the infrastructure
  2. Ansible (or Chef or Puppet or whatever) provisions the servers (possibly ones created or set up by Terraform)
  3. Some automated deployment deploys to the web servers

Steps 2/3 there could change or be replaced by a Docker-based setup, but you may still decide to use Terraform to set up the infrastructure, potentially even if using ECS.
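The three-step workflow above can be sketched as a command pipeline; `terraform` and `ansible-playbook` are the real CLIs discussed in the thread, but the flags, inventory layout, and `deploy.sh` script here are assumptions for illustration.

```python
def workflow(env):
    """Return the commands for the three steps, in order (file names assumed)."""
    return [
        # 1. Terraform builds the infrastructure for this environment
        ["terraform", "apply", "-auto-approve", f"-var=env={env}"],
        # 2. Ansible provisions the servers Terraform created
        ["ansible-playbook", "-i", f"inventory/{env}", "site.yml"],
        # 3. A deploy script (hypothetical) pushes the app to the web servers
        ["bash", "deploy.sh", env],
    ]

for cmd in workflow("staging"):
    print(" ".join(cmd))
```

In practice a CI job (Jenkins, in the setup described later in the thread) would run these steps in sequence and stop on the first failure.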

1 Like

I use both Mailgun and Amazon SES. Mailgun is great for creating lists and sending out to a list. With Amazon SES you have to send the emails to your list members yourself.
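That “yourself” part looks roughly like this: with SES you chunk and loop over your own subscriber list, whereas Mailgun manages the list server-side. This is a sketch; the batch size of 50 is an assumption for illustration, and the actual send call is omitted.

```python
def build_batches(recipients, batch_size=50):
    """Chunk a mailing list into batches for per-call sends (size assumed)."""
    return [recipients[i:i + batch_size]
            for i in range(0, len(recipients), batch_size)]

# Hypothetical subscriber list of 120 addresses
subscribers = [f"user{i}@example.com" for i in range(120)]
batches = build_batches(subscribers)
print([len(b) for b in batches])
# Each batch would then go to one send call against the SES API
```

The bookkeeping (batching, retries, bounces, unsubscribes) is exactly what a list product like Mailgun handles for you.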

For sending emails we just use our Office 365 subscription, or SES if the software is installed on servers managed by the customer.

In our case it is actually 2 steps:

  1. Terraform builds the infrastructure: EC2 instances, security groups, ELBs, Route 53, S3, VPC, ElastiCache, EFS, RDS, SES, creates SSL certificates, etc…
  2. A custom script, executed from Jenkins, SSHes to the environment and deploys build artifacts and configuration files.

The nice thing about Terraform is that you can use it to upgrade or downgrade an existing live environment.

1 Like