
My current work project is a mobile app for a medical device company. The software is, depending on the eventual feature set, FDA regulated, which means we will eventually have to undergo a HUGE FDA audit of our codebase. Not a problem if we have our entire history with comments and tags tying commits to work items, right? Well, our source control server died.
Yesterday morning, our Gitorious server decided to eat its virtual disk. IT is trying to restore it from a recent backup, but it's a many-terabyte off-site backup, and we have work to do. So we fired up a new Git instance, pushed master from the dev who had the latest, and started working again. Once Gitorious is stood back up, we will push the latest to it, and our code reviews in Crucible will work like a champ again. Yay!
Here is how we did it:
- Create a bare Git repository on a developer's machine (mine).
git init --bare /c/GimmeTehCodez/project.git
- Share the directory with Windows file sharing. I limited access to only my team members, but had to set them all to Read/Write access.
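I did this through the sharing GUI, but from an elevated command prompt something like this should be roughly equivalent (the group name here is just a placeholder; swap in your own, and CHANGE is the Read/Write permission level):
net share GimmeTehCodez=C:\GimmeTehCodez /GRANT:DOMAIN\MobileDevs,CHANGE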
- Add a new remote for the mirror. Note the four /s in the remote URL: file:// is just the scheme, three slashes would point at a local absolute path, and four slashes makes the rest a //host/share UNC path on another computer. In this case that computer is Wallace (my machine), which shares the GimmeTehCodez directory so coworkers can upload stuff that can't be put in email. Kind of a neat hack, huh? No drive mapping needed!
git remote add mirror 'file:////Wallace/GimmeTehCodez/project.git'
- Start by pushing the last good master back into the repo.
git push mirror master
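From there, each team member can point their existing clone at the same share and keep working against it. A sketch of what that looks like on their machines (use whatever branches you actually have):
git remote add mirror 'file:////Wallace/GimmeTehCodez/project.git'
git fetch mirror
git push mirror master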
And we’re off and running again! We could get a network share from IT, but that would take too long and distract them from getting my server back.
If we were using TFS and had a server issue like this, how long would we be out? And if we'd lost our code history, how much worse would the FDA audit be?
Next step is to start thinking about a 'hot standby.' Probably not needed, but it would be cool. Then I can think about using git-tfs to give our TFS projects standbys as well…
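A minimal version of that standby could just be a second bare repository on another box that gets a mirror push on a schedule. A sketch, with a made-up path for the standby machine:
git remote add standby 'file:////SomeOtherBox/Standby/project.git'
git push --mirror standby
The --mirror flag pushes all refs (branches and tags) and prunes ones that no longer exist locally, so the standby stays a full copy of whatever the primary has.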
