Post by
headless_monkeyman | 2014-10-22 | 23:20:06
Come to think of it, you could do calculations using the free AWS "micro" instances: spin them up, max them out, and dispose of them as soon as the demand drops. Amazon penalizes you for sustaining load beyond a couple of users for any length of time, but it has no objection to huge short bursts of calculation (and they are free). Many a young programmer has beaten their head against the wall trying to figure out just why the "training" servers work so backwards.
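For the curious, here's a rough sketch of that spin-up-and-dispose cycle using the AWS CLI. The AMI ID and the run_job.sh user-data script are placeholders, not anything I actually run:

```bash
# Hypothetical burst-compute cycle on a free-tier micro instance.
# ami-12345678 and run_job.sh are placeholders for illustration.
INSTANCE_ID=$(aws ec2 run-instances \
    --image-id ami-12345678 \
    --instance-type t2.micro \
    --count 1 \
    --user-data file://run_job.sh \
    --query 'Instances[0].InstanceId' \
    --output text)

# Wait for it to come up and let the job max it out...
aws ec2 wait instance-running --instance-ids "$INSTANCE_ID"

# ...then dispose of it the moment demand drops.
aws ec2 terminate-instances --instance-ids "$INSTANCE_ID"
```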
I'd love to know a bit more about how you set up Google and how it is working for you. My current project is more worried about regional access speeds to central processing and data sources than about scale. (In the past, though, the biggest concern on most systems I built was the impossibility of predicting audience growth. Would a game launch to tens of thousands or hundreds of thousands? Hundreds next week, or _millions_? The hazards of doing entertainment for Disney and the like. Most plans back in those days involved a lot of grey boxes with Cisco on the front and wads of money.)
Let me know if you want another 2 cores with SSD backing covering any particular geographic area. I've got SF, NY, Amsterdam, Singapore, and I think Australia available at the moment. Have you considered putting the route calculator into a Docker container? You can put anything you want into one, dependencies and all, and it will run as expected anywhere you can get time for it, isolated and with protected resources. This is a poster child for Docker. If I were doing this permanently, I'd take those containers, manage their development with git, and have Dokku (the world's most useful 100 lines of Bash code) distribute them so they run directly from the master branch.
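To make that concrete, here's a rough sketch of what containerizing the calculator might look like; the base image, the requirements file, and the route_calc.py entry point are all made up for illustration:

```bash
# Hypothetical Dockerfile for the route calculator; every name is a placeholder.
cat > Dockerfile <<'EOF'
FROM python:2.7
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python", "route_calc.py"]
EOF

# Build once; the image then runs the same anywhere you can get time for it.
docker build -t route-calc .

# Run it isolated, with resource limits so it can't hog the box.
docker run -m 512m --cpu-shares 512 route-calc
```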
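And the Dokku side of that workflow really is just git; the server and app names here are hypothetical:

```bash
# Hypothetical Dokku deployment: push master, and Dokku builds and runs the container.
git remote add dokku dokku@apps.example.com:route-calc
git push dokku master
```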
My apologies; I forget that a lot of people come here to play a game and get away from computer talk and the like. I'll send you my e-mail via the contact form in case we ever want to chat about this stuff later.
Cheers, and fair winds,
Kurt