I recently attended the Percona Live! conference in Santa Clara, and there were some really interesting things worth highlighting.
Last time I checked how TokuDB can be used as a drop-in replacement for InnoDB. The first impressions were jolly good: way less disk space usage, and the TokuDB host can be part of the existing replication cluster.
When TokuDB was announced as a new storage engine for MySQL, it made me very curious, but I hadn't tried it out until now.
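To make the drop-in idea concrete (this is not taken from the posts themselves): once the TokuDB plugin is loaded on a server, converting an existing InnoDB table is just an engine change, which is what makes the experiment cheap to try on one replication slave. A minimal sketch, with the host, credentials and table name all made up:

```python
# Minimal sketch, assuming the TokuDB plugin is already installed on the server
# and mysql-connector-python is available; host/credentials/table are placeholders.
import mysql.connector

conn = mysql.connector.connect(host="db-test", user="admin", password="secret")
cur = conn.cursor()

# Verify the engine is actually available before touching any table.
cur.execute("SHOW ENGINES")
engines = {row[0] for row in cur.fetchall()}
if "TokuDB" not in engines:
    raise RuntimeError("TokuDB engine is not available on this server")

# Converting an InnoDB table is a plain ALTER; other replicas can keep
# their own engine while you compare disk usage and behaviour.
cur.execute("ALTER TABLE kinja_test.posts ENGINE=TokuDB")
conn.close()
```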
Poking around in private view, I found this old picture in a draft post from 3/3/13.
To know how much money to throw at hardware for Kinja, we occasionally benchmark it to see how much headroom we have left. Only instead of coming up with an artificial benchmark that tries to accurately represent our millions of daily users, we do the inverse and *remove* servers from live traffic to see…
It's a typical Wednesday morning for me. I've successfully made my way onto the train and I'm just sitting there, gazing out the window. That's when it struck me: I wonder what my most frequently used commands are.
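The post answers this from shell history; purely as an illustration (not the author's own one-liner), counting the first word of each history line gets you most of the way there. A small sketch, with the bash history path as an assumption:

```python
# Count the most frequently used commands from a bash history file.
# A rough sketch: the path is assumed, and shells that store timestamps
# or multi-line entries are not handled.
from collections import Counter
from pathlib import Path

history = Path.home() / ".bash_history"
counts = Counter(
    line.split()[0]
    for line in history.read_text(errors="ignore").splitlines()
    if line.strip()
)

for command, count in counts.most_common(10):
    print(f"{count:6d}  {command}")
```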
The Kinja frontend now talks to (most of) the backend API via the network and load balancers; previously it had all been over loopback.
Finally, a chance to sit down and talk about some Graphite! And not the awesome structure pictured above even though it does look like a sweet thing to talk about.
I showed in an earlier post how to drop a whole database in a very safe way (no replication lag at all), and that technique works for dropping a single table too, but cleaning up a table that way can take hours if not days to finish, so it is not the most comfortable approach. We also don't want to have even a small…
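The earlier post's exact recipe isn't reproduced in this excerpt, but one widely used variant of this kind of careful cleanup is to hard-link the table's data file, drop the table (which is then nearly instant), and shrink the leftover file gradually so the filesystem never takes a big IO hit. A rough sketch only, with the file path, chunk size and pacing all assumed:

```python
# Sketch of the "hard-link, drop, then shrink slowly" cleanup trick.
# Not necessarily the method from the earlier post; path, chunk size and
# sleep interval are assumptions to keep the example concrete.
import os
import time

DATA_FILE = "/var/lib/mysql/kinja/big_table.ibd"  # hypothetical table file
LINK = DATA_FILE + ".todelete"
CHUNK = 256 * 1024 * 1024  # release 256 MB per step
PAUSE = 2                  # seconds between steps, to spread the IO load

os.link(DATA_FILE, LINK)   # keep the data blocks alive after DROP TABLE
# ... DROP TABLE runs on the server here and returns quickly, because the
# filesystem still holds a reference to the data via LINK.

size = os.path.getsize(LINK)
while size > 0:
    size = max(0, size - CHUNK)
    os.truncate(LINK, size)  # free blocks in small, paced steps
    time.sleep(PAUSE)

os.remove(LINK)  # finally remove the now-empty file
```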
Currently we use manual failover on the MySQL cluster at Kinja, but there are more comfortable ways to achieve this. I have created a demo environment with Vagrant where anybody can check out MySQL Master HA in practice.
Ahh.. we need more space!!
MySQL replication is great, and mostly reliable, but sometimes it can get messed up. The good news is that we can handle this.
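How the post fixes a broken slave isn't shown in this excerpt; as a neutral illustration only, the first diagnostic step usually amounts to reading SHOW SLAVE STATUS. A minimal check, with the host and credentials made up and using mysql-connector-python:

```python
# Minimal replication health check; host and credentials are placeholders,
# and this only reports the problem rather than fixing it.
import mysql.connector

conn = mysql.connector.connect(host="db-slave-1", user="repl_check", password="secret")
cur = conn.cursor(dictionary=True)
cur.execute("SHOW SLAVE STATUS")
status = cur.fetchone()

if status is None:
    print("not configured as a slave")
else:
    print("IO thread running: ", status["Slave_IO_Running"])
    print("SQL thread running:", status["Slave_SQL_Running"])
    print("Seconds behind:    ", status["Seconds_Behind_Master"])
    if status["Last_SQL_Error"]:
        print("Last SQL error:    ", status["Last_SQL_Error"])
conn.close()
```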
This is a really good post: http://blog.memsql.com/cache-is-the-n…
This Monday the core network device (the thing in the first pic) froze up in one of our datacenters, instantly taking out 50% of Kinja's servers by rendering them unreachable.
In-depth Troubleshooting on NetScaler using Command Line Tools, by David McGeough