At the waning of the dot com boom, Joel Spolsky wrote a great list of best practices for development teams he called the “Joel Test”. It ended up covering things like using version control and automated builds, which were not as common in those Wild West days:

A score of 12 is perfect, 11 is tolerable, but 10 or lower and you’ve got serious problems. The truth is that most software organizations are running with a score of 2 or 3, and they need serious help, because companies like Microsoft run at 12 full-time.

Once software began “eating the world”, the engineering teams that sprouted up in every company pretty much learned what worked and the things in Joel’s list became taken for granted.

Today, the profession of software product management is undergoing similar growing pains. It’s just taken us longer to notice — probably because engineers typically outnumber PMs ten to one. Having dozens or even hundreds of PMs on a single app would have sounded silly ten years ago. But now it’s common.

As the profession has risen to prominence — some say even replacing investment banking as the go-to prestigious job for new grads — its reputation has sunk with the people actually building the products. Take this complaint from a Google engineer:

The PM’s that I have had occasion to work with have demonstrated an inability to do even the basic tenets of the position. They exhibit no ability to write even the most basic of product requirement documents, have no concept of ownership as it applies to being at meetings, discussions, reviews – unless it’s a management review.

While there’s plenty to be said about how to build great products or even be effective individually as a PM, this isn’t enough.

At several points in my career when I’ve worked on products big enough to have “teams of teams”, I’ve found that the culture and process of the PM group is as important as anything an individual PM does. Dysfunction at this level wastes not just PMs’ time but the time of all the people on their team who depend on them.

I started formulating a mental checklist that I run through when considering joining a large group of PMs on a mature product to see if they have it together. I thought it’d be fun to formalize it in the same manner as Joel’s list for engineering teams. Here goes!



Why it Matters: Product orgs in software companies make decisions in many different ways.

Some are heavily design-driven/vision-driven, where values and principles reign supreme, and are run as a kind of benevolent dictatorship. They require leaps of faith on things that may not be immediately verifiable through metrics.

Some are heavily metrics and KPI driven, rely on lots of experimentation and testing, and make a wide series of bets to see what works.

Where on this spectrum product groups should be is another discussion. And plenty of people have written about the ways teams can confuse the map for the territory. That’s all gravy. The thing you want to make sure of is whether people even have the same map. Frequently, they don’t!

How to Check: Ask people about how the company makes major (above the team level) product decisions and decides how to invest resources. If it sounds like KPIs are king, can every PM tell you exactly how the KPIs of major parts of the business are calculated and the tradeoffs between them (not just the ones for their product)? If it sounds like there is a broader vision or grand strategy that takes a leap of faith, do the PMs really grok it and believe in it? Are you getting similar answers from people?

Another positive sign is whether there are frequent, well-run meetings or written updates where the leads of the organization can candidly impart priorities or discuss major decisions with the PMs (i.e. separate from the general all-hands meetings that tend to be fluff).


Why it Matters: Formal reviews become necessary at some point in an org’s life, so they might as well be productive. They’re the key time to get input from your management on your roadmap and the major decisions the team is working through. This often requires distilling complex, non-intuitive facts about the situation into digestible chunks and creating the necessary space for real discussion.

In the lousier orgs I’ve worked in, PMs present info to leadership for the very first time in heavily-rehearsed PowerPoints. This usually doesn’t leave enough time for the group to properly comprehend the context before weighing in on the decision. It also handicaps the team: it means you can only work on shallow problems that translate into snappy, charismatically-delivered reviews, like the thick-skinned tomatoes bred to survive mechanical harvesting.

PMs in this situation learn to spend lots of time “pre-flighting” review content by routinely booking a flurry of 1:1s with key people the week before. The goal becomes making sure nothing interesting is debated or surfaced in the review itself. The review becomes a kind of ceremony. This is exhausting and time-wasting for everyone.

The best way to make this work is to bake it into the process for every review: require pre-reads to be sent at least 24 hours in advance, and schedule time for the people in the meeting to read them and even leave comments (before the presenter arrives). A good review is a discussion where the materials are referenced but not presented. Reviews should feel casual, and you should leave with new insights and things to look into.

How to Check: Ask PMs how their last few product reviews with executives went down, and how these things are typically handled. See if pre-reads come up. If the product team has templates for review materials that dictate a certain structure or approach (not just pretty fonts and logos), this is also a good sign, because it shows someone has at least thought about it (you can always ditch it if it doesn’t work).


Why it Matters: “Get out of the Building” is always a great reminder for companies of any size. It’s pretty much a platitude. But in the big companies that many PMs find themselves in, it’s harder than it sounds!

For one thing, tech company offices – at least the ones in Silicon Valley – are designed like casinos: there are few windows and clocks, they have excellent free food and drinks, they are located on remote reservations with free buses from major urban areas.

Environment aside, there may also be institutional barriers. One may find that interacting with users is bottlenecked through dedicated departments like UX Research or support (who are inevitably short-staffed). There may be some rule that regular employees are not to interact with customers or partners. I’ve seen teams operate a year or more without talking to real users.

Regularly talking to users is important for two reasons:

  1. It disabuses you of whatever internal frame of reference may have contributed to your roadmap decisions (other teams, KPIs, goals, politics, performance reviews) and forces you back into operating with an external frame of reference (that is to say, thinking of your product as something that exists in the real world and has to actually solve people’s problems to succeed).
  2. By listening to enough stories, your brain’s natural pattern-matching ability will kick in and reveal things you wouldn’t have known to look for with data alone.

How to Check: Ask PMs and their teams how often they talk to users. Ideally it should be once a week. If they can recall specific conversations or amusing anecdotes, this is good! You can also get a clue from asking PMs “What problem is your team solving?” or “What jobs does your product/feature do and how well?” It’s fine if the answer is framed in both internal terms (business problems) and external terms (user problems), or if they can talk about the tradeoffs between them.

It’s a red flag, though, if people exclusively talk about problems to solve with an internal frame of reference, or if the supposed “people problems” are fake, contrived ones that no human being would say they had. I’ve seen tons of decks with stock photos meant to represent user personas juxtaposed with fake quotes that sound right out of the Coneheads.


One of the things that took getting used to after moving to product management from engineering is a very different ramp-up period.

As an engineer, it felt like after a month or two internalizing the codebase, the stack, and the dev environment, I was moving at full capacity.

As a product manager, there are parts of the job that come quickly on a new team, but developing deep domain knowledge to the level of being able to produce any real insight that the team finds useful has taken as long as six months.

[Chart: apparent effectiveness over time]

This all depends, of course, on how quick a study the PM is, how much domain expertise they come in with, how complex the domain really is, and how willing any old-timers are to get them up to speed.

The funny thing I’ve noticed, though, is that in many orgs I’ve worked in, PMs, EMs, and other leaders have a habit of departing just as they finally understand what they’re working on. I’ve even seen teams that are repeatedly spun up to solve the same problem from scratch after a previous one failed.

Institutional knowledge is only as lasting as the people at each level in the institution. Some companies claim they “reward failure” or “take away learnings” from efforts around the company, but this is bogus if nobody is around to remember what happened before them.

How to Check: Ask PMs about the major pivots their team has made (e.g. actual roadmap changes, or changing what they’re building). The given reasons for these should be mostly substantive realizations about the strategy or changes in the market, not reactions to meaningless org chart chaos. Another thing to look for is various types of “cultural infra” meant to get new people on a team up to speed on a complex domain, like training, videos, and wikis.


It may come as a surprise to some, but being a “scrum master”, writing progress reports, and making Gantt charts is not the primary responsibility of a PM. It’s one of the myriad tasks one might do in order for the team to be successful, but it can be safely lumped with the numerous “taking out the trash” or “nobody’s job” type tasks PMs end up doing.

The key work of a PM that is definitively their job and no one else’s is the critical synthesis of inputs to produce the strategy and roadmap for their team. Of course, this doesn’t mean being the decision maker; it means helping the team coalesce various inputs into a single coherent vision with some internal logic, rather than a miasma/probability cloud of things people thought would be cool. This is a task that requires actual intellectual rigor and focus (to the same extent that engineering does). This kind of work is never done. There are always more questions to ask, more inputs to get, more developments from competitors and partner teams to understand, and more angles to look at the data through. Then comes the work of distilling it all into crisp framing that resonates with the team and helps them do their job.

This work is so important that it can’t — as is often the case — be left till the end of a day of harried meetings and errand-running as an afterthought, or done during the weekends. PMs thus have to operate on both a maker’s schedule and a manager’s schedule. Making the necessary time requires, in some cases, telling members of the team, in as diplomatic terms as possible, that you’re not their goddamn secretary. As Ben Horowitz wrote in a now classic document:

Good product managers don’t get all of their time sucked up by the various organizations that must work together to deliver right product right time. They don’t take all the product team minutes, they don’t project manage the various functions, they are not gophers for engineering.

It should therefore be permissible, when needed, to shift administrivia and execution work onto other team members. It should be possible to build capacity and help people feel more comfortable coordinating execution, communicating with other teams, and even writing feature specs if necessary.

How to Check: Ask any PMs how they spend their time between strategy and execution. See how deeply they have wrestled with key questions their team has had on their own product.


Product data is too important to be left to the data scientists. Everyone on the team should have basic familiarity with the tables where product data is stored and be able to finagle a LEFT JOIN every once in a while. It ain’t rocket surgery!
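The bar really is that low. As a sketch of what “basic familiarity” looks like, here’s a toy example using Python’s built-in sqlite3 (the `users` and `events` tables are hypothetical stand-ins for whatever your warehouse actually contains): a LEFT JOIN counts an action per country without silently dropping users who never did it, which is exactly the mistake an INNER JOIN would make.

```python
import sqlite3

# Hypothetical tables for illustration -- stand-ins for real product data.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE users (id INTEGER PRIMARY KEY, country TEXT);
    CREATE TABLE events (user_id INTEGER, action TEXT);
    INSERT INTO users VALUES (1, 'US'), (2, 'BR'), (3, 'IN');
    INSERT INTO events VALUES (1, 'share'), (1, 'share'), (2, 'share');
""")

# LEFT JOIN keeps every user, including those with zero events;
# COUNT(e.user_id) counts only the non-NULL (matched) rows.
rows = conn.execute("""
    SELECT u.country, COUNT(e.user_id) AS shares
    FROM users u
    LEFT JOIN events e ON e.user_id = u.id
    GROUP BY u.country
    ORDER BY u.country
""").fetchall()

print(rows)  # [('BR', 1), ('IN', 0), ('US', 2)]
```

Note the `('IN', 0)` row: an inner join would have dropped the user with no events entirely, quietly skewing the per-country numbers.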

This is important to free up your data scientist for the kind of in-depth, hard-hitting analysis that has the potential to inform product direction.

When I arrived at Facebook and attended the “Data Camp” part of orientation, I was so blown away by their investments in tools, infra, and training to empower every employee to examine product data that I wondered why any company does it differently.

It’s certainly possible for teams to go too far with data analysis, metrics, and experiments. But there’s no excuse to not to have the tools and infra to let everyone on the team dig through product data on their own when they’re curious about something.

How to Check: Ask what kind of data tools and infra are available, and whether people outside of data science routinely query data for their product.


If the organization does some kind of “360° feedback” as part of performance reviews, PM managers — perhaps more so than managers of other roles — need to pay particular attention to interpreting it thoughtfully.

For one thing, the PM job by definition entails deciding every day who to disappoint in the name of progress for the team. We can even take on vastly different personas in different meetings: data-driven in one, design-driven in another, opinionated in one, collaborative in another. Sometimes the team needs help framing strategy. Other times, the strategy is fine and the thing the team most needs is someone to run around moving organizational obstacles out of the way.

PMs and their managers should align on an explicit expectation of which things are important and what it’s okay to trade off (or who it’s okay to piss off). Good PMs will have an infinite list of things they want to do to help their team, but don’t have the time to do. So prioritizing, and being aligned on that prioritization with one’s manager, is the only way to attain any semblance of sanity and work-life balance. Good PM managers will be receptive to this discussion, and great PM managers will initiate it.

Far more important than the fairness of the rating[1] in a performance review is whether the written feedback is actually thoughtful and insightful, rather than the typical hodgepodge of indiscriminately regurgitated peer feedback and thought-terminating clichés. As Michael Lopp writes in No Surprises:

A measure doesn’t help you in your career. Your performance review isn’t about comparisons to others. They’re about what you did and what you could do. What you’re looking for is the content.

How to Check: Ask PMs what kind of feedback they got from their past couple managers and whether their managers seemed to take coaching seriously.


The usual qualifiers of diversity (race, gender, orientation) should go without saying. Without diminishing that, I want to highlight a couple other aspects:

Non-Captive Engineers: Ideally, the organization should not be burning through engineers and treating them as replaceable. There should be enough senior engineers who are happy to stay on, and their situation should be such that, if the product is going in the wrong direction, they can speak up or move on (rather than having to tolerate the product team’s BS). This is, then, a vital feedback loop for ensuring good product management.

This breaks down if most of the engineering team is made up of new grads (they are scared because it’s their first job and busy ramping up on the raw technical component of the work) or H-1B holders (thanks to the US’s terrible immigration system, losing the job can mean losing the right to stay in the country). While new grads and immigrants make wonderful teammates, it is a sign of dysfunction if this is the only kind of engineering talent the team can retain. There should be a few free agents as well!

On Weirdo PMs: This is a personal and unfair bias, but I like working with PMs who went into it through channels other than the official ones, and are motivated by their passion for building things and solving problems. Not because they feel it’s something they’re supposed to do, or because they think it’s prestigious, or because it is the default path from whatever elite school they went to.[2] A team of people who are trained box-tickers and track-followers is no fun, and probably doesn’t apply much critical thinking to product problems. If I meet a team, I hope to find a few people even weirder than me.

How to Check: Get to know prospective teammates and gauge the diversity of their backgrounds.


That’s pretty much my wishlist. It’s worth remembering that no organization is perfect, and that it’s possible to fix a lot of these problems through influence. But it’s good to know what you’re dealing with!

If you’ve thought about similar things, I’d love to get your take. Send me an email or slide into my DMs!

  1. Every time I get worked up over my rating, I have to remind myself that the rating was never supposed to be fair anyway. The nature of the process (at least under stack ranking) is to output a normal distribution from inputs that, most likely, were not originally normally distributed. Losses and gains from individual teams are socialized – that’s just the deal. 

  2. I’m aware that being able to enter the tech industry without the “right” education or qualifications is, of course, its own form of privilege. My ability to somehow be employable by big tech companies as a product manager after a life of doing one dumb Homer Simpson-esque thing after another is probably benefited by having been a white, middle-class, male with savings and no dependents.