Date
2024.12.20
Type
Article
Index
TXT_HIRING-AND-SCALING-DESIGN-TEAMS
Status
Published

Hiring and Scaling Design Teams

As a leader, you’ll be responsible for growing and maintaining a high-performing team. Starting with hiring, I’ll discuss my approach to equitable hiring processes and tools for evaluating designers.


Table of Contents

  1. Be Kind
  2. Rubrics
  3. Phone Screens
  4. Case Studies
  5. Resourcing Models

Be Kind

If you take one thing away from this chapter about hiring, I hope it’s this: be kind.

If you’ve interviewed before, you know how much energy and anxiety the interviewing process demands, from pre-interview nerves to anxiously awaiting word on whether you’re moving forward. As an interviewer, you hold an incredible amount of asymmetrical power. You are, after all, driving a decision that can permanently alter someone’s career and future.

So, when you’re interviewing, make it your priority to review a candidate’s materials. Read your recruiter’s notes (if you have one). Take the time to write questions that will give you the most information about whether they’ll succeed in the role. Be on time, and be present in the interview and nowhere else. Lastly, express gratitude for their energy and consideration.


Rubrics

Hiring designers is one of the most consequential responsibilities of design leadership. While intuition plays a role, the goal is to gather objective information about a candidate’s potential for success. This comes primarily from evaluating their experience with similar work in similar contexts, and when that’s not available, understanding their problem-solving frameworks and approaches.

Why Rubrics Matter

Rubrics create consistency across interviewers and candidates while reducing (but not eliminating) the impact of unconscious bias. They help teams focus on relevant skills and experiences rather than arbitrary preferences or pattern matching.

A well-designed rubric transforms subjective impressions into measurable observations. Instead of vague feelings about a candidate being “not quite senior enough” or having “great potential,” rubrics push us to identify specific behaviors and capabilities that indicate readiness for the role.

Crafting Effective Rubrics

Start by defining what success looks like in the role. For a senior product designer, this might include leading complex projects independently, mentoring associate designers, maintaining a track record of launching products, and influencing product strategy. The rubric should reflect these key responsibilities through concrete, observable behaviors.

For each interview question or evaluation area, document three levels of responses:

Needs Development: “I would solve this by making it look nicer” indicates a surface-level understanding of design’s role in problem-solving. The candidate focuses solely on aesthetics without considering user needs or business context.

Meets Expectations: “First, I’d validate the problem through user research and metrics. Then I’d explore solutions through rapid prototyping, getting feedback from users and stakeholders throughout the process.” The candidate demonstrates a structured approach incorporating both user needs and business goals.

Exceeds Expectations: “Based on my experience with similar challenges, I’d start by analyzing our metrics to size the opportunity. I’d then conduct targeted research to understand user pain points, while working with engineering to understand technical constraints. This would inform a design strategy that balances user needs, business goals, and technical feasibility.” The candidate shows strategic thinking, draws from relevant experience, and considers multiple stakeholder perspectives.
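The three-level structure above can be captured as a simple data structure so every interviewer rates against the same anchors and records the evidence behind each rating. A minimal sketch in Python; all names here are illustrative, not part of any particular hiring tool:

```python
from dataclasses import dataclass, field

@dataclass
class RubricCriterion:
    """One evaluation area with anchored response levels."""
    name: str
    # Anchor descriptions keyed by level: 1 = needs development,
    # 2 = meets expectations, 3 = exceeds expectations.
    anchors: dict[int, str]
    notes: list[str] = field(default_factory=list)  # quotes and concrete examples

problem_solving = RubricCriterion(
    name="Problem-solving approach",
    anchors={
        1: "Focuses on aesthetics; no user or business context.",
        2: "Validates the problem, prototypes, gathers feedback.",
        3: "Sizes the opportunity; balances user, business, and technical needs.",
    },
)

def rate(criterion: RubricCriterion, level: int, evidence: str) -> dict:
    """Record a rating together with the specific evidence that supports it."""
    criterion.notes.append(evidence)
    return {
        "criterion": criterion.name,
        "level": level,
        "anchor": criterion.anchors[level],
    }
```

Requiring an `evidence` string on every rating is the point: it forces interviewers to write down the specific behavior they observed, which is what combats recency bias in the debrief.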

Implementing Rubrics Effectively

Share the rubric with your entire hiring panel before beginning interviews. This ensures everyone understands what good looks like and helps calibrate expectations across different interviewers. Regular calibration sessions using real examples (anonymized) help maintain consistency over time.

Create space in your rubric for capturing specific examples and quotes. These concrete details prove invaluable during hiring discussions and help combat recency bias. They also provide excellent material for constructive feedback to candidates who aren’t selected.

Remember that rubrics are guides, not checklists. A candidate might show exceptional ability in unexpected ways that your rubric didn’t anticipate. Build in flexibility to capture these insights while maintaining the structure that makes rubrics valuable.

Learning and Iteration

Treat your rubrics as living documents. After each hiring cycle, gather feedback from interviewers about questions that worked well or needed clarity. Look for patterns in candidate responses that might suggest adjustments to your evaluation criteria.

Work closely with your recruiting partners and UX researchers to refine your approach. Researchers bring valuable expertise in structured evaluation and reducing bias. Recruiters can help identify which criteria might unnecessarily limit your candidate pool.

Beyond Individual Interviews

Extend your rubric thinking to other aspects of the hiring process, such as case studies. This comprehensive approach helps ensure you’re evaluating candidates fairly at every stage.

Consider sharing portions of your rubric with candidates, particularly for case studies. This transparency helps candidates understand your expectations and prepare effectively, leading to better signal in your evaluation process.

The ultimate goal isn’t perfect prediction of success—some uncertainty is inevitable in hiring. Instead, rubrics help you make more informed decisions based on relevant evidence rather than implicit bias or incomplete evaluation. When used thoughtfully, they make your hiring process fairer, more consistent, and more likely to identify candidates who will thrive on your team.

Remember that rubrics serve your evaluation process; they shouldn’t constrain it. The best rubrics create structure while leaving room for the human elements of hiring—the spark of potential, the unexpected strength, or the unique perspective that could transform your team.


Phone Screens

Once you have some alignment on the product team context, you’re ready to start speaking to candidates by phone or video. Thirty minutes is generally enough time for these screens. My agenda is:

  1. Introduction to me, the interviewer, including my role, how long I’ve been with the company, and my current location
  2. Information about the role, the larger business, product, or team context, and the size of the organization
  3. Role-specific questions informed by the rubric
  4. At least five minutes for candidate questions or clarifications

Thirty minutes goes quickly, so use this time to assess the candidate’s experience, verbal communication, and storytelling skills, and to make sure the role matches their goals. These screens go well when you can cover each role-specific question and come away with preliminary data on their experience working in similar contexts.

Keeping in mind the massive power disparity and physiological stress response, I like to let candidates settle in with an icebreaker that is highly effective in setting the tone of the conversation:

“What, to you, is good design?”

I’ll usually dig deeper into whatever they answer. For example, if they respond, “Good design is simple,” I might ask why they think so many products are so complex. I’ll also ask for products or companies that exemplify their definition of good design, to understand how they examine other products and designs.


Case Studies

Credit: The UX leadership team at AppFolio brought this to life, specifically Carlisle Sargent, who visualized this early on and helped us find a format that works across design, leadership, and research.

Case studies are a common and often contentious practice in design hiring. While they can provide valuable insights into a candidate’s abilities, they also come with challenges and potential drawbacks. This guide balances gaining meaningful information about candidates with creating a positive, respectful candidate experience.

The Purpose of Case Studies

My approach to case studies is designed to assess:

  1. Storytelling ability
  2. Visual design skills
  3. Comfort in presenting and discussing their own work
  4. Understanding of different work contexts
  5. Collaboration and communication skills with cross-functional partners

Acknowledging Candidate Experience

We recognize that case studies can create anxiety and additional labor for candidates. To mitigate this:

  • We never ask for free work, or for work the candidate can’t reuse elsewhere.
  • We structure the case study session to allow candidates to showcase their personality and design philosophy, not just their work.
  • We involve cross-functional partners to give candidates a realistic preview of the collaborative environment they’ll be working in.

Case Study Format

Our case study sessions follow this format:

  1. Personal Introduction (5 minutes): Candidate shares a bit about themselves, including interests outside of work. This helps ease nerves and reminds us all that candidates are more than their professional personas.

  2. Design Philosophy (10 minutes): Candidate discusses their design process and philosophy. They share a product they admire and suggest one thing they’d add to improve it, explaining their reasoning.

  3. Case Study Presentation (20-30 minutes): Candidate presents their chosen case study. They should focus on their role, decision-making process, and outcomes.

  4. Q&A (15 minutes): Panel asks clarifying questions about the presented work. Candidate has the opportunity to ask questions to the entire team.

Rubric for Case Studies

Use this rubric to assess candidates consistently across different interviewers and roles. Rate each area on a scale of 1-5, where 1 is “Needs Significant Improvement” and 5 is “Exceptional”.

  1. Storytelling and Communication: Clarity of presentation, logical flow of information, engagement with the audience.

  2. Visual Design Skills: Quality of visual artifacts presented, understanding of visual design principles, appropriateness of design choices for the problem at hand.

  3. Problem-Solving and Process: Clear articulation of the problem being solved, logical approach to solving the problem, consideration of constraints and trade-offs.

  4. User-Centered Approach: Evidence of user research or consideration of user needs, application of user insights in the design process, ability to advocate for the user.

  5. Cross-Functional Collaboration: Examples of working with other disciplines (e.g., product, engineering), understanding of technical constraints and business goals, ability to communicate design decisions to non-designers.

  6. Impact and Results: Clear articulation of the project’s impact, quantitative or qualitative results if available, lessons learned and applied to future work.

  7. Design Philosophy and Critical Thinking: Articulation of personal design philosophy, thoughtful critique of admired product, ability to suggest meaningful improvements.

  8. Adaptability and Growth: Examples of overcoming challenges or setbacks, openness to feedback and alternative viewpoints, evidence of personal or professional growth through the project.

Remember, this rubric is a guide, not a strict scorecard. Use it to structure your thoughts and ensure consistent evaluation across candidates, but also trust your instincts and consider the specific needs of your team and role. You can also share your rubric with candidates to ensure you see the right examples in their work history.

After the Case Study

  • Provide timely feedback to candidates, whether moving forward or not.
  • If not moving forward, offer constructive feedback that the candidate can apply in future interviews.

Resourcing Models

If you’re given headcount at any time, you should have a resourcing plan for how you’ll staff your team. As a design leader, you’ll need to ensure your team’s shape matches the short- and long-term goals of your organization. This starts with knowing how you’d allocate one or ten new people at any time.

Experience Team Model

Ratio: 1:1 (Designer to Product Team)

Best for: Product development teams shipping customer-facing products

Key benefits: Deep domain expertise development, strong partnership with product and engineering, clear ownership and accountability, faster decision-making and iteration.

Considerations: May need additional support for platform/infrastructure teams, requires strong design system to maintain consistency, need mechanisms to share learnings across teams.

Platform Team Model

Ratio: 1:5 (Designer to Platform Teams)

Best for: Infrastructure, API, and platform teams

Key benefits: Efficient resource utilization, consistent approach across platform teams, shared learning and patterns.

Considerations: Need clear prioritization mechanisms, may require “office hours” or similar support models, focus on developer experience and tooling.

Agency Model

Ratio: 1:Many

Best for: Any team model where design workstreams are formed and handed back to development teams

Key benefits: Focus on delivery and speed.

Considerations: May not allow for discovery/dual-track work; risks a “mercenaries instead of missionaries” dynamic.

UX Research Team

Ratio: 1:10 (Researcher to Designers)

Structure Types:

  1. Centralized Model: Dedicated research team serving all product areas, consistent methodology and tooling, strong research practice development.

  2. Embedded Model: Researchers assigned to specific product areas, deep domain expertise, closer alignment with product teams.

  3. Hybrid Model: Core team for methodology and tooling, embedded researchers for key product areas, flexible resourcing for project needs.

Research Focus Areas:

  • Design Research (Evaluative): Works closely with product teams, focuses on usability and design validation, provides rapid feedback loops.

  • Foundational Research (Generative): Explores future opportunities, conducts market and user behavior studies, informs product strategy and roadmap.

  • Research as a Service: Improving research practice within product development teams or models where everyone is a researcher.

Design System Team

Ratio: 1:40 (Design System Designer to Product Designers)

Structure: Typically includes Visual Designer(s) who own aesthetic direction, Interaction Designer(s) who own component behavior, and Frontend Developer(s) for implementation and maintenance.

Key responsibilities: Enable speed and consistency across multiple teams with shared assets and resources for designers and developers, create, document, and evolve components and patterns, support adoption across teams.

Design Operations

Ratio: 1:12 (DesignOps to Designers)

Focus areas: Process optimization, tool management and governance, resource planning and allocation, budget management, team health and engagement.
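The ratios above (1:10 research, 1:40 design system, 1:12 DesignOps, relative to product designers) imply rough supporting headcount for a team of a given size. A back-of-the-envelope sketch, treating the ratios as planning heuristics rather than fixed rules:

```python
import math

# Support ratios from the resourcing models above (one role per N product designers).
RATIOS = {
    "ux_researcher": 10,   # 1 researcher per 10 designers
    "design_system": 40,   # 1 design-system designer per 40 designers
    "design_ops": 12,      # 1 DesignOps partner per 12 designers
}

def support_headcount(product_designers: int) -> dict[str, int]:
    """Estimate supporting roles implied by the stated ratios, rounded up."""
    return {
        role: math.ceil(product_designers / per)
        for role, per in RATIOS.items()
    }
```

For a team of 40 product designers this suggests roughly 4 researchers, 1 design-system designer, and 4 DesignOps partners; rounding up means even a small team gets at least fractional-to-one coverage in each area.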

Implementing a Service Layer

Design-as-a-Service Model

This approach helps teams without dedicated design resources while maintaining quality and consistency.

Key Components:

  1. Clear Request Process: Intake form or ticket system, criteria for what constitutes a design request, SLA expectations.

  2. Prioritization Framework: Impact assessment criteria, urgency evaluation.

  3. Delivery Models: Quick consultations, project-based support, office hours, design reviews.

Research-as-a-Service Model

Similar to design services, but focused on research support for teams without dedicated researchers.

Components:

  1. Research Request Framework: Study objectives and scope, timeline and resource needs, expected deliverables.

  2. Research Methods Library: Templates for common studies, self-service tools and guides, best practices documentation.

  3. Support Types: Research planning consultation, methodology review, data analysis support, workshop facilitation.

Making Resourcing Decisions

Key Questions to Consider:

  1. What is the nature of the work? Customer-facing vs. internal, strategic vs. tactical, ongoing vs. project-based, research needs and timing.

  2. What is the required expertise level? Domain knowledge requirements, technical complexity, strategic importance, research methodology expertise.

  3. What are the collaboration needs? Cross-functional dependencies, geographic distribution, communication requirements, research and design integration.

  4. What are the business constraints? Budget limitations (fixed headcount vs contract), timeline requirements, quality expectations, research tool costs.

Growing Teams

As design teams grow, their resourcing needs evolve:

  • 1-30 designers: Focus on strong core team and basic processes
  • 31-50 designers: Introduce specialized roles and formal processes
  • 51-200 designers: Establish centers of excellence and shared services
  • 200+ designers: Complex matrix organization with multiple service layers
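The thresholds above can be expressed as a simple lookup, useful when modeling how a hiring plan moves the team between phases. A sketch using the bands as stated:

```python
def growth_phase(designers: int) -> str:
    """Map team size to the rough growth phase described above."""
    if designers <= 30:
        return "core team and basic processes"
    if designers <= 50:
        return "specialized roles and formal processes"
    if designers <= 200:
        return "centers of excellence and shared services"
    return "matrix organization with multiple service layers"
```

The boundaries are deliberately coarse; the useful signal is noticing when a hiring plan crosses one, since each crossing implies new processes and roles, not just more designers.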