I use an unmanaged dedicated server, but I’m not completely happy with it… Interested to know what other people use.
What kind of website do you have in mind? Just a bunch of static files? A WordPress website? A custom application?
For Ruby on Rails apps that I work on - I use Heroku. My consulting sales page is hosted on MyDevil (a FreeBSD shell account) but could be hosted absolutely anywhere (it’s just a bunch of static files). If I were to host a WordPress site I would go with a commercial provider and spend the time saved on growing my business.
I use AWS’s Elastic Beanstalk to host our website. The website is coded in Java with a MySQL database.
For static sites, I’d consider using AWS S3 static website hosting. A rather neat, high-performance solution.
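For reference, publishing a static site to S3 takes only a few CLI calls. A sketch, assuming the AWS CLI is installed and configured; the bucket name and local directory are placeholders:

```shell
# Create a bucket, enable static website hosting, and upload the files.
# "my-site-bucket" and ./site are placeholders. Public-read access also
# requires a bucket policy, omitted here.
aws s3 mb s3://my-site-bucket
aws s3 website s3://my-site-bucket \
    --index-document index.html --error-document 404.html
aws s3 sync ./site s3://my-site-bucket
```

The site is then reachable at the bucket’s website endpoint, and you can put a CDN in front of it if you need HTTPS or caching.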
I like the simplicity of Digital Ocean… I can create instances in a few seconds and they are cheap and reliable… The back end is a little simple though, but once running I get in with SSH and forget they are virtual… lol
But you have to be willing to use raw, unmanaged instances… Although you could leave the management to companies like ServerPilot if you are running MySQL and WordPress…
Hosting is not as big a deal as it used to be…
Softlayer or AWS. Have tried many others, these are the two that are the most reliable.
I have a non-managed Linode. I’ve had it since late 2010, when my daughter was born and running a server 24/7 at home wasn’t a good idea.
In all that time I’ve only had one problem: TODAY. Someone used a Joomla security hole and uploaded some scripts to send spam, lots of spam (when I noticed it, there were 14,000+ emails in the Postfix queue). Tomorrow I’ll redeploy it, destroying all the current content.
So, if you rent a non-managed server:
- Always stay up to date (I’m still using Debian 6…)
- Don’t run your own email server. You can have a simple account at Zoho Email, which is free for up to 10 addresses; a mail server is a pain (spam, security, blacklists, etc.)
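On the spam cleanup mentioned above: the queue can be inspected and flushed with Postfix’s own tools. A sketch, assuming root on a stock Postfix install; check the queue contents before deleting anything:

```shell
# Show the queue; the final line summarizes "-- N Kbytes in M Requests."
postqueue -p | tail -n 1
# Rough count of queued messages (queue IDs are hex, one per header line)
mailq | grep -c '^[0-9A-F]'
# Delete everything in the queue -- destructive, so inspect first!
postsuper -d ALL
```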
I ran my own server on SliceHost (and then RackSpace after they acquired SliceHost). I always feared the day I’d be hacked via whatever latest security hole was discovered and not know about it. I’ve happily given up on managing my own server.
I think that with a standard distribution (Debian, Ubuntu, CentOS, etc.) it’s difficult to be hacked. IMHO the big problem is the “added” software like, in my case, Joomla. You can be on a managed server, but if you’ve installed old software or scripts (an old WordPress, for example), or your website is buggy, you can still be hacked.
Anyway, tomorrow I’ll reinstall the server with Ubuntu, and I hope it will run fine for another 5 years.
Famous last words!
Anything not regularly updated is open to a hack attack.
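One low-effort mitigation on Debian/Ubuntu is automatic security updates via unattended-upgrades; a sketch (the package and file names here are the standard ones):

```shell
# Install and enable automatic security updates (run as root)
apt-get update
apt-get install -y unattended-upgrades
# Writes /etc/apt/apt.conf.d/20auto-upgrades containing:
#   APT::Periodic::Update-Package-Lists "1";
#   APT::Periodic::Unattended-Upgrade "1";
dpkg-reconfigure -plow unattended-upgrades
```

Note this only covers distro packages; anything installed outside apt (Joomla, WordPress, etc.) still needs its own update routine, which is exactly the gap being discussed in this thread.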
Do AWS keep the system up to date?
We use a variety of hosting options, including Heroku, Digital Ocean, AWS (specifically CloudFront), and managed hardware distributed in several data centers.
We use Heroku first, when possible, because it is easy to deploy to and get up and running fast.
Next, if it doesn’t fit on Heroku (because it needs access to OS features that we can’t use on Heroku, for example), then we run it on a droplet.
Static sites go to AWS CloudFront.
Finally we have managed hardware for our primary application and database, as well as for our name server network.
Thanks everyone for the responses. Those virtual “cloud” servers look interesting. I’ve only been using unmanaged VPS or dedicated servers, and I totally lack experience with cloud servers. How do the virtual cloud servers mentioned here (Heroku, AWS instances, Linode, and others) compare to a dedicated hardware server, in terms of:
- hardware performance
- network performance
- OS features - can I have root SSH with full access, install anything I want, etc.?
- overall reliability (how stable is performance and network)
How often are there problems like the server being totally inaccessible due to some network problem in the DC (this happens from time to time with any dedicated server provider I’ve used, no matter what price was paid), or the server being poorly accessible from some locations, or a “bad neighborhood” making your instance run painfully slow?
The point is, I’m exploring alternatives to find the most reliable hosting solution for a website that sells products. As everyone here knows, in the online sales world website downtime costs lots of money…
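Since downtime is the main worry, an external monitor is cheap insurance whichever host you pick. A minimal sketch to run from cron on a machine outside your data center; the URL and alert address are placeholders:

```shell
#!/bin/sh
# Alert by email when the site stops answering.
# URL and MAILTO are placeholders; "mail" assumes a local MTA or relay.
URL="https://example.com/"
MAILTO="ops@example.com"
if ! curl -fsS --max-time 10 "$URL" >/dev/null 2>&1; then
    echo "Site down: $URL at $(date -u)" | mail -s "DOWNTIME ALERT" "$MAILTO"
fi
```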
For WordPress sites, I use http://hosting.io, which also offers a managed version, uptime monitoring and backups.
I have almost 10 sites there and I am 100% satisfied.
Well, what I was trying to say is that with an UPDATED standard distribution it’s difficult to be hacked; it’s when you install third-party software (Joomla, WordPress, etc.) that you can have problems, because updates for that software aren’t centralized with the distribution. Sorry for my really bad explanation.
What makes you unhappy with your unmanaged dedicated server? From what you’re asking, is it that you’re concerned about the single point of failure? It is a risk, but you do get failures everywhere - AWS has had a lot more downtime than my dedis over the years, for example. A lot’s down to your monitoring and your provider.
The main practical difference between a dedi, a VPS and cloud is setup time and minimum contract period. Dedis can take a while to be provisioned (especially if your provider orders them in on demand); a VPS is usually faster (sometimes automated, but often not); in the cloud, though, the physical hardware is already powered up, so you have direct and immediate control over when you use it. With automated tools it’s a matter of minutes from requesting an instance to it being live.
That means you can quickly add more resources to meet demand, using things like elastic IPs and load balancing; it just tends to work out a bit more expensive than an equivalent dedi/VPS if you run them all the time. Also, like a VPS, you don’t need to worry about monitoring the hardware.
All the options are otherwise essentially identical: you pick the OS image you want, get root, go in to set it up, and you should still have separate backups and a DR plan.
Regarding Heroku, that’s a bit different: AWS EC2 is IaaS but Heroku is PaaS, which means it’s more like a locked-down managed server - you basically send your app to them and they run it on AWS for you, swapping a bit of control for convenience. I think AWS Elastic Beanstalk is similar, but I haven’t used it yet.
[quote=“radiac, post:15, topic:3623”]
is it that you’re concerned about the single point of failure? It is a risk, but you do get failures everywhere - AWS has had a lot more downtime than my dedis over the years
[/quote]Yes, the main concern is the single point of failure and the chance that the site and the services it provides may become unavailable. As “cloud” servers don’t guarantee 100% uptime either, I think I’ll use a combined solution after all, and just duplicate essential services on a cloud server.
Thanks for the information provided.
That’s similar to what I do - I have a remote hot spare (cheap dedicated rather than cloud, but the principle’s the same) and rely on manual DNS failover using a low TTL. It’s a reasonable trade-off of cost vs reliability, although be aware that your TTL can often be ignored and records cached for longer for some visitors, so it’s not perfect.
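On the TTL point, it’s worth checking what your authoritative servers actually hand out before relying on failover. A sketch using dig, where example.com is a placeholder:

```shell
# Full answer line: name, TTL, class, type, address
dig +noall +answer example.com A
# The second field of each answer line is the TTL in seconds
dig +noall +answer example.com A | awk '{print $2}'
```

Querying your authoritative nameserver directly (dig @ns1.yourprovider.example …) shows the configured TTL; querying a public resolver shows the remaining cache time instead.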
A high availability setup is something you can sink a lot of time and money into and never need; it’s effectively an insurance policy against downtime, so it’s about finding a solution that is time- and cost-effective compared to the likelihood of worst-case failure, and the resulting loss of revenue. With my provider and setup I assume I’ll be able to replace my main server within a couple of days of it totally failing, and would be unlucky to have to do so more than once every five years; realistically I doubt I’d face significant long-term financial impact or loss of reputation from that, even without any failover, so slow DNS propagation for the handful of visitors it may affect in that time isn’t much of a concern to me.
Another approach would be to use a reverse proxy service like Cloudflare, but then they become your new SPOF; you could also have your hot spare next to your live server with a shared IP for failover, but you still depend on your provider’s power and network; or you could have a mix of the above, which would make you significantly more reliable than all the large businesses who disappear whenever AWS goes down.
Beyond that, fast reliable failover means cutting out DNS and being able to move your IP between independent DCs, which means BGP - but that puts you in a different league (and price range) when talking to any provider, and probably isn’t worth the cost and effort for most businesses.
Digital Ocean + Forge. Greatest combo ever. (maybe Linode + Forge a close second)
AWS for static sites and Digital Ocean for the main app. Both services are great.
Static site on S3, cached with CloudFront. Works well, and is fast. Obviously it imposes a few limitations because server-side code isn’t possible.