How We Ship: GitPrime

Our ultimate goal is to understand and share with you, at a nuts-and-bolts level, how great companies ship software to customers. To do that, we're doing email interviews with engineering, product, and project managers at great companies to understand how they ship. If you'd like to share how your team ships software, get in touch!

Today's interview is with Travis Kimmel, CEO at GitPrime. GitPrime is an analytics tool that helps engineering managers see their team's work more clearly and understand software developer productivity.




Codetree: Because shipping software is context dependent, can you let us know your team size and any other context you think might be important for readers?

Travis Kimmel: We have eight people on our development team. We’re a BI tool for software managers, and we’re the first in this space to focus on actionable metrics derived from the codebase to help those teams succeed. Our team spends a lot of time working on new, exciting problems.

CT: How do you decide what to build next? What does that process look like?

TK: Both customer input and our vision for the product drive our roadmap. Sometimes we bring ideas to our customers, and sometimes they bring them to us, but the most important thing is that we take the time to gather feedback at the concept stage, before we build anything.

Simpler changes (say, a small modification to an existing feature) go through a fairly lightweight triage process with our stakeholders. We figure out which changes can make the biggest impact for our user base and prioritize the work from there.

CT: Could you sketch out what shipping a new feature looks like for you end-to-end?

TK: Every new feature is run through the product team. We create high-fidelity mocks in Sketch and show them to customers; what customers see is essentially the full proposed feature. If it passes muster, we implement it. Product ensures the requirements are ready to send to engineering. We always provide a robust set of feature requirements to the engineering team — all business logic is solved, any formulas that need to be calculated are included, and so on. We don’t want our engineers to have to guess or fill in major requirements gaps, so we have a fairly high standard for specs.

Once the product team approves the requirements, that spec will get carved into tickets for the engineers to implement.

CT: Do you estimate how long features will take to build? If so what does that process look like? Do you track if features are behind or ahead of schedule? Do you track how long they took compared to how long you thought they would take?

TK: For us, it depends on the feature. There are two types of features:

The first is a problem that’s already been solved by the industry (e.g., single sign-on). For this type, our estimates tend to be quite accurate, since the problem already has a known solution and typically someone on the team has done it before.

And then there’s the second: a problem that hasn’t been solved yet. It’s not always knowable how long that’ll take, so we build in checkpoints. We’ll say “let’s try and get here by this date,” and on that day, we’ll stop and see where we are.

Instead of saying “this project will take N days,” we’ll check in in 4-5 days and see how things are going. Sometimes we pull the plug because it just isn’t working out, other times because it’s going too slowly to keep investing in. Sometimes it goes great.

CT: Do you have a particular process for paying down technical debt and/or making significant architectural changes?

TK: Paying down technical debt usually happens within the scope of other work. Typically, our team will pay down a ton of technical debt while revisiting a feature to upgrade it for our users.

Right now we tend to be fairly averse to work that is just paying down technical debt. A scope of work that is purely debt paydown tends to snowball — it’s often not clear when it’s done, and it can lead to burnout. Instead, we’ll decide to build a feature, and while we’re in there we’ll focus on an area that we know needs cleaning up.

CT: How do you deal with delivery dates and schedules?

TK: It’s a hybrid. We do a weekly release, and then hotfix as needed. Things tend to ship when they’re ready, but we try to only ship once a week — it reduces the overhead for Ops.

CT: How many people are in your product + dev team? What are their roles?

TK: We have about 12 total — eight in engineering and four in product.

CT: Do you organize people by feature, or by functional team, or some other way?

TK: A little bit of both. We have two broad functional teams: one does a lot of customer-facing features, and the other focuses more on enterprise features. Our engineers swap back and forth between the two — it’s more about having a couple of leads who can run each team, and less about dividing the team up. Within those teams, engineers do own features.

CT: How do you know if a feature you’ve built was valuable to customers? What metrics do you use if any?

TK: If it’s a new feature, we like to put it in front of our customers and evaluate how valuable it is to them. Everyone tells you they love your stuff because they’re trying to be polite, so you have to dig in and ask things like “who would use this?” to get a read on whether they’re actually excited about the feature or just nodding along. We look at metrics, though they’re relatively simplistic (page loads and whatnot). We generally use these to guide what questions we ask users, then rely on qualitative responses from our customers and face time as a stronger indicator of what people want.
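To make that concrete, here is a minimal sketch of how simple page-load counts can surface the features worth asking users about. This is not GitPrime's actual tooling; the event format and cutoff are illustrative assumptions.

```python
from collections import Counter

# Hypothetical page-load events; in practice these would come from
# an analytics pipeline.
events = [
    {"user": "u1", "page": "reports/review"},
    {"user": "u2", "page": "reports/review"},
    {"user": "u1", "page": "reports/new-widget"},
]

loads = Counter(event["page"] for event in events)

# Low-engagement features become the questions asked in customer calls.
THRESHOLD = 2  # illustrative cutoff
ask_about = [page for page, count in loads.items() if count < THRESHOLD]
print(ask_about)  # ['reports/new-widget']
```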

CT: Do you have a dedicated testing role? How important is automated testing in your process? What tools do you use for testing?

TK: We don’t have a dedicated testing role yet. Today our engineers are responsible for their own test automation. As we grow, we’ll formalize that, which will eventually include a dedicated testing role.

CT: Does your team create functional specs for new features? If so what do they look like? What do you use to create them? What does the process look like for reviewing them?

TK: Always. Our team creates very, very detailed specs.

That said, specs are not a bible — they’re an expression of intent. We define the goal of the feature and what it’ll do if it’s successful, we include comps of what it should look like, and all formulas are provided. A single report can be 15-20 pages of documentation before implementation.

Once we have the doc together, we bring in engineering. They’ll ask a number of questions, and they might tell us that certain parts of the feature aren’t practical. There’s a socialization process with engineering until we both agree on a reasonable target to build. Then they take a run at it.

Of course, some of this might change during implementation, but there’s always a very robust set of specifications.

CT: What about technical specs? What do they look like? Who reviews them? Who has final say on the spec?

TK: These tend to be in the same document. We tend to have a single spec for each feature, detailing the functional side, design side, and technical side.

CT: What’s the one tool your team couldn’t live without, and why?

TK: Sketch. It’s a design app that’s vital for our communication process. Everyone on the product team uses it — not just for specs, we’ll even use it to explain an idea to each other. It’s fantastic.

CT: What other tools are important to you in your engineering process?

TK: We use all flavors of git hosts, given our product. We generally bias toward letting engineers use whatever tools they want, so there’s quite a bit of variation amongst our engineers on that front.

On the product side, there are two key tools: Sketch (mentioned earlier) and Dropbox Paper, which has been great for creating documents of all types. We do everything in there.

CT: Do you triage issues? If so what does that process look like? How often do you do it?

TK: We triage issues twice a week, looking at open issues and figuring out which ones are the most pressing. If it’s a bug, the most important part of that process is figuring out how stuck the user who reported it is. If we’re dealing with a little UI bug that’s not stopping anyone from using the product, we’ll fix it, but it’s not something we’ll lose sleep over. On the other hand, if a customer is really struggling and can’t get value out of the app because of a bug, we’ll stay up late and fix it.
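As an illustration of that heuristic, here is a small Python sketch. The fields and priority labels are hypothetical, not GitPrime's process; the point it captures is that how stuck the reporter is, not the bug's surface area, drives priority.

```python
from dataclasses import dataclass

@dataclass
class Issue:
    title: str
    is_bug: bool
    blocks_user: bool    # is the reporter unable to get value from the app?
    cosmetic_only: bool  # e.g. a small UI glitch

def priority(issue: Issue) -> str:
    """Rank an issue by how stuck the reporting user is."""
    if issue.is_bug and issue.blocks_user:
        return "critical"  # stay up late and fix it; hotfix if needed
    if issue.is_bug and issue.cosmetic_only:
        return "normal"    # fix it, but it can ride the weekly release
    return "backlog"       # everything else waits for the next triage

# Usage: a bug that blocks a customer outranks a cosmetic glitch.
assert priority(Issue("export broken", True, True, False)) == "critical"
assert priority(Issue("misaligned icon", True, False, True)) == "normal"
```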

CT: What part of your current process gives you the most trouble? What is working the best?

TK: What works really well for us is keeping meeting overhead low. We really favor putting things in writing; it’s nice to have a written record of every decision. Most decision-making happens in writing, whether that’s Slack, a doc, Jira, or something else. That’s probably the highlight of our process: favoring written communication keeps everyone on the same page, and it has the nice side effect of reducing meeting time.

CT: What is something you do that you feel is different from what others are doing?

TK: We sort of touched on this already, but the level of detail that we put in our requirements is relatively unique. It lends itself well to our distributed team approach and reliance on async communication. We really don’t want engineers to ever be in a position where they’re trying to build something that’s a fuzzy target, so the level of documentation that we provide to engineering is very robust.

CT: Where do you get feature ideas from?

TK: These come in two forms: upgrades to existing features and brand new features. Typically, the best upgrade ideas come from our users.

New features typically start from talking with our users (or other people in the industry) about the problems they run into. From there, we get our research team involved to find out whether we have enough data to address those problems. When the answer is yes, we start turning that into a feature — and then we’ll see whether it’s valuable for our users and worth building.

CT: If you had to give your shipping process a name, what would it be?

TK: I’d call our shipping process “Punctuated Delivery”. We continuously deliver to a production-like environment, but we only ship to users once a week after the release has been tested.

CT: When you are building new features how do you handle bugs that get found before the feature ships? After it ships?

TK: It depends on the severity of the bug. Since we do a weekly release, we can typically wait for the next release unless it’s extreme. For critical bugs, we hotfix as soon as the fix is ready.

CT: What does your branching and merging process look like?

TK: Our CTO, Vincent Driessen, is the one who originally proposed git-flow, and our branching and merging process is a modified version of it. We use feature branches and the like, so it’s fairly standard.

CT: How is new code deployed?

TK: New code is put on a test environment — there’s both automated and manual testing. The product team will make sure everything looks good, and then we’ll release it late at night so we can deal with it if anything goes wrong.

CT: How often do you ship new code to customers?

TK: Weekly.

CT: Do you use feature flagging? Staging servers?

TK: Yes, both. Feature flags help us slow-roll new features to a test batch of users before turning them on for everyone.
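For readers unfamiliar with the mechanics, here is a minimal sketch of percentage-based rollout, one common way to implement that kind of slow-roll. The flag store, names, and threshold are illustrative assumptions, not GitPrime's implementation.

```python
import hashlib

# Hypothetical in-memory flag store; a real system would keep this in a
# database or config service. Names and percentages are illustrative.
FLAGS = {
    "new-report": {"enabled": True, "rollout_percent": 10},
}

def is_enabled(flag_name: str, user_id: str) -> bool:
    """Return True if the flag is on for this user.

    Hashing the user id gives each user a stable bucket in [0, 100),
    so the same test batch of users sees the feature across releases.
    """
    flag = FLAGS.get(flag_name)
    if not flag or not flag["enabled"]:
        return False
    digest = hashlib.sha256(f"{flag_name}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 100 < flag["rollout_percent"]

# Usage: gate the new code path; everyone else gets the old behavior.
if is_enabled("new-report", "user-42"):
    pass  # render the new report
```

Turning the feature on for everyone is then just a matter of raising rollout_percent to 100.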

CT: Do you use Sprints? How long are they? How is that working out?

TK: An outside view of our team might say that our sprints are one week long, but that’s not how we treat it. We don’t have a formal sprint process. Our engineers do pretty amazing work, and for where we’re at there’s not a huge upside to forcing work into sprint buckets.

Generally features ship when they’re ready. The challenge is that sometimes work expands to fit the size of its container, so it’s important to keep communication open. We regularly check in and evaluate whether the work we’re focusing on at that time is still worth building.

CT: Are you guys remote? If so, how’s that working out for you?

TK: Yes. It’s amazing! Being remote allows us to work with the best people in the world, regardless of where we are all located.

CT: Any closing thoughts?

TK: It’s important to encourage a culture where engineers can push back on requirements — or say that they don’t fully understand them. The stakeholder receiving that feedback should welcome it and improve the spec through a Q&A process that mops up any open questions. It’s definitely important that the output of this is, again, written — even if it’s an in-person meeting, someone should take notes!

Interested in engineering management? Check out GitPrime's weekly newsletter, Engineering Impact, filled with hyper-relevant content for engineering managers to help them keep current with trends in engineering leadership, productivity, culture, and scaling development teams.

