Global HR Navigator

Google Built the World’s Most Data-Driven HR Function. Here’s What Bock’s Model Looks Like at 50 Employees.

Google's People Operations team had 400+ analysts and built the most sophisticated HR analytics function in history. A 200-person B2B SaaS company expanding to three countries used four of Bock's principles with zero data scientists -- and the results were more useful than they had any right to be.

13 min read · Composite (B2B SaaS) · February 24, 2026


The Situation

In 2022, a B2B SaaS company called Claros (anonymized at the company's request) faced a decision that was, on its surface, unremarkable: where to open its first international engineering office.

Claros had been founded in Austin, Texas in 2018 by two former enterprise software engineers who saw a gap in the market for developer tooling around API testing. By early 2022, the company had 190 employees, $28 million in ARR, and a Series B that had closed eight months earlier at a $180 million valuation. Growth was strong -- revenue was increasing at roughly 60% year-over-year -- but the product roadmap required engineering talent that was increasingly difficult to find and retain in Austin's overheated tech market. Senior backend engineers with Kotlin or Go experience were commanding $180,000-$220,000 base salaries. Claros was losing candidates to Cloudflare, CrowdStrike, and other companies with Austin offices that could offer higher total compensation packages.

The CEO and CTO wanted to open an international engineering office. The candidates were the United Kingdom (London), Germany (Berlin), and India (Bangalore). The reasoning was geographic diversification of the talent pool, potential cost arbitrage (particularly for India), time zone coverage for a product that served customers globally, and -- though nobody said it this directly -- a hedge against the US visa uncertainty that made it difficult to retain several H-1B holders on the team.

The VP of People, a former Workday HR business partner named Lisa Chen, had joined Claros eight months earlier as the company's first dedicated HR leader. Before Lisa, "HR" had been the office manager, the CFO, and a Justworks subscription. Lisa had inherited a function with no performance management system, no compensation philosophy, no workforce planning capability, and exactly zero data infrastructure for people decisions. The employee database was Justworks plus a series of Google Sheets maintained by various people who had, at various times, needed a list of employees for various purposes.

The international expansion question landed on Lisa's desk with a familiar shape: "We need to decide where to open the office. What do you think?"

Lisa had read Laszlo Bock's Work Rules! during her Workday years. She knew the Google story -- how Bock had built a People Operations team of more than 400 people, with dedicated analysts running experiments on everything from interview question effectiveness to the optimal lunch line length. She knew about Project Oxygen, which had identified the eight behaviors of Google's best managers through rigorous data analysis. She knew about Project Aristotle, which had analyzed 180+ teams to identify the factors that predicted team effectiveness. She knew, in short, what a world-class data-driven HR function looked like.

She also knew she had none of it. No analysts. No experiments. No data infrastructure. A 190-person company with a three-person People team (herself, an HR coordinator, and a recruiter) and a CEO who needed a country recommendation within six weeks.

What Lisa did next was more instructive than anything Google ever published. She took four of Bock's principles -- the ones that did not require a 400-person analytics team -- and adapted them for a company roughly one-four-hundredth of Google's size. The results were imperfect, incomplete, and directionally more useful than the gut-feel decision the leadership team would have made otherwise.

This is the story of what data-driven HR actually looks like when you do not have data scientists, when your sample sizes are measured in dozens instead of tens of thousands, and when "good enough to be useful" beats "not enough to be rigorous."


The Framework

Laszlo Bock served as Google's SVP of People Operations from 2006 to 2016, a decade during which Google grew from approximately 6,000 to over 70,000 employees. In 2015, he published Work Rules!: Insights from Inside Google That Will Transform How You Live and Lead, which remains the most detailed public account of how a major technology company built a data-driven HR function.

The book is frequently cited and rarely applied -- because the version Bock describes requires resources that only a handful of companies on earth possess. Google's People Analytics team had PhD statisticians, ran controlled experiments with thousands of participants, built proprietary software tools, and had the organizational commitment (and the margins) to fund a research function within HR. The insights were genuine. The execution model was non-transferable.

But beneath the Google-scale machinery, Bock articulates four principles that are independent of scale. They do not require data scientists. They do not require large sample sizes. They require a mindset shift: from "we don't have enough data to be rigorous" to "we have enough data to be less wrong."

Principle 1: Let Data Inform Decisions, Not Justify Them

Bock draws a sharp distinction between using data to discover answers and using data to confirm answers someone has already decided on. At Google, this meant running analyses before forming hypotheses wherever possible. Project Oxygen did not start with "we think managers need to be good coaches" and look for supporting data. It started with "does management quality matter at all?" -- a question that Google's engineering culture, skeptical of management in principle, was genuinely willing to test.

At a small company, this principle translates to: collect the data before the meeting where the decision will be made, and present it without a recommendation attached. Let the data create the conversation rather than support a predetermined conclusion. This is a behavioral discipline, not a technical capability.

Principle 2: Experiment, Even at Small Scale

Google ran formal experiments: A/B testing onboarding programs, randomizing interview panel compositions, testing different feedback formats. These experiments had statistical power because of Google's scale.

Bock's deeper point, however, is that the habit of experimentation matters more than the rigor of any individual experiment. Trying two different onboarding approaches with successive cohorts of five new hires each will not produce a statistically significant result. But it will produce observation -- qualitative data about which approach generated more engagement, faster ramp time, or fewer early-tenure questions to the People team. At small scale, experiments produce learning, not proof. That is still more than most companies have.

Principle 3: Democratize Data -- Share What You Know

One of Bock's most countercultural moves at Google was making people data visible to a broader audience than the HR team. Compensation band information, promotion rate data, and attrition statistics were shared with managers rather than hoarded by HR. The logic was pragmatic: managers make people decisions every day (who to hire, how to allocate work, who to develop, how to give feedback). If they make those decisions without data, they rely on intuition and bias. If they make those decisions with data, they are still influenced by intuition and bias, but the data creates a counterweight.

At a small company, this means sharing -- not necessarily all compensation details, but the frameworks, the benchmarks, and the reasoning. When people understand how decisions are made, they trust the decisions more, even when they disagree with the outcomes.

Principle 4: Default to Open

Bock argues that the default in most organizations is information hoarding: data is restricted unless there is a reason to share it. Google inverted this: data is shared unless there is a reason to restrict it. The burden of proof falls on the person who wants to restrict, not the person who wants to access.

For HR specifically, this means publishing the compensation philosophy (not individual salaries, but the principles and methodology), sharing workforce planning assumptions with the leadership team, and making the rationale behind people decisions visible rather than opaque. The "default to open" principle is ultimately about building institutional trust -- a resource that, as Bock repeatedly demonstrates, compounds over time and cannot be purchased.

These four principles -- data-first decisions, small-scale experimentation, data democratization, and default to open -- form the portable core of Bock's model. Everything else in Work Rules! is Google-specific execution. These four are universal.

The question is: what happens when a three-person People team at a 190-person company tries to apply them to a real strategic decision?


Applying Principle 1: Data-First Country Selection

The CEO and CTO had an implicit preference for London. Both had worked with UK-based engineers in prior roles. London was familiar, English-speaking, and culturally proximate to Austin. The CTO had attended a conference in Berlin the previous year and found the engineering talent impressive but was concerned about German labor regulations -- a vague worry based on a conversation at a dinner, not on any specific analysis.

Lisa's instinct was to validate the London preference and move on. She had six weeks. Building a data-driven country selection process from scratch felt impossible.

But Bock's first principle nagged: let data inform the decision, not justify it. If Lisa started from "how do we validate London?" she would find data that supported London because that is what confirmation bias produces. If she started from "what does the data say about all three options?" she might learn something that changed the conversation.

She built a comparison framework using data she could actually obtain in six weeks. No proprietary databases. No consultants. Publicly available sources, recruiter intelligence, and internal data she already had.

The data she collected:

| Factor | London | Berlin | Bangalore |
| --- | --- | --- | --- |
| Senior backend engineer salary range (Kotlin/Go) | $95,000-$130,000 (GBP 75K-100K) | $80,000-$110,000 (EUR 75K-100K) | $35,000-$55,000 (INR 28L-45L) |
| Time to fill (recruiter estimate, senior eng) | 45-60 days | 50-70 days | 30-40 days |
| Time difference from Austin (CT) | 6 hours (overlap: 9am-12pm CT / 3pm-6pm GMT) | 7 hours (overlap: 9am-11am CT / 4pm-6pm CET) | 10.5 hours (overlap: 8am-9:30am CT / 6:30pm-8pm IST) |
| Entity setup cost (estimated) | $15,000-$25,000 | $20,000-$35,000 (GmbH formation + Handelsregister) | $10,000-$20,000 |
| Entity setup time | 4-6 weeks | 8-12 weeks | 6-10 weeks |
| Employer burden (taxes + social contributions above gross) | ~15% (NIC, pension) | ~21% (social insurance contributions) | ~12% (PF, ESI, gratuity) |
| Notice period (senior employees, typical) | 1-3 months (contractual) | 1-3 months (statutory, depends on tenure) | 1-3 months (contractual, commonly 90 days) |
| Termination complexity | Moderate (unfair dismissal protections after 2 years) | High (requires social justification, works council consultation above 5 employees) | Moderate (contractual terms govern) |
| English proficiency in engineering | Native | High (German engineers routinely work in English) | High (English is the standard working language in tech) |
| Existing Claros customers in region | 340 (UK + Ireland) | 180 (DACH region) | 95 (India + Southeast Asia) |

Lisa sourced the salary data from Glassdoor, Levels.fyi, and conversations with two external recruiters. The entity setup estimates came from three EOR/PEO providers she contacted for quotes (Deel, Remote, and a UK-specific provider). Employer burden rates came from published government sources. Customer data came from the CRM.

What the data revealed that gut feel had missed:

First, the cost arbitrage for India was smaller than assumed once employer burden, travel costs, and the productivity impact of minimal time zone overlap were factored in. The CTO had estimated Bangalore would save 60% on engineering labor costs. The data showed approximately 50-55% on base salary, reduced to roughly 40% when employer contributions, annual travel budget (Lisa estimated $4,000-$6,000 per Bangalore engineer per year for two on-site weeks), and the overhead of asynchronous coordination were included. Still significant savings, but less dramatic than the headline number.
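The erosion of the headline number can be sketched in a few lines. The figures below are normalized and partly assumed: the ~52% base-salary gap mirrors the article's revised estimate, but the ~10% US employer burden and the 15% coordination drag are illustrative placeholders, not values from Lisa's analysis.

```python
# Sketch of "headline savings vs. fully loaded savings".
# Normalized figures: Austin base = 100. The US burden (10%) and the
# coordination-drag multiplier (15%) are assumptions for illustration.

def loaded_cost(base, burden, extras=0.0):
    """Base salary plus employer contributions plus any extras."""
    return base * (1 + burden) + extras

austin = loaded_cost(100, 0.10)                  # assumed ~10% US employer burden
bangalore = loaded_cost(48, 0.12,                # ~12% PF/ESI/gratuity burden
                        extras=48 * 0.15 + 2.5)  # coordination drag + travel budget

print(f"Base-salary savings:  {1 - 48 / 100:.0%}")         # → 52%
print(f"Fully loaded savings: {1 - bangalore / austin:.0%}")  # → 42%
```

On these assumptions, a ~52% base-salary gap shrinks to roughly 40% once burden, travel, and coordination overhead are loaded in -- the same direction of adjustment Lisa's analysis produced.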

Second, Berlin emerged as a more compelling option than the leadership team had assumed. The salary range was 15-20% lower than London. The engineering talent pool in Berlin was strong and growing -- Berlin's tech ecosystem had expanded significantly, with companies like Delivery Hero, SoundCloud, and N26 building large engineering organizations. The time zone overlap with Austin was slightly worse than London but still workable with a 9am-11am CT synchronous window. The customer base in the DACH region was growing faster than the UK base (42% year-over-year vs. 28%).

Third -- and this was the data point that changed the conversation -- Claros already had three German engineers on the team. They were employed as US-based remote workers (two on H-1B visas, one on a green card). All three had expressed interest in returning to Germany. If Claros opened a Berlin office, it could seed the office with experienced engineers who already knew the codebase, the product, and the culture -- dramatically reducing the ramp time for a new office. London had no equivalent advantage.

Lisa presented the data in a format she borrowed from the engineering team's decision documents: a structured comparison with no recommendation, followed by a discussion section identifying tradeoffs. She explicitly flagged the CEO's London preference as a known prior and asked the team to consider whether the data changed their view.

After a 90-minute discussion, the leadership team chose Berlin as the first international office, with a plan to evaluate Bangalore as a second office 12-18 months later. The CEO later told Lisa it was the first time an HR presentation had changed his mind about a business decision.

Ross Sparkman's Strategic Workforce Planning framework validates Lisa's approach. Sparkman's five-step methodology -- understand strategy, analyze current workforce, model future needs, conduct gap analysis, build action plans -- is exactly what Lisa executed in compressed form. She did not have Sparkman's full data infrastructure. But she followed his logic: start with the business question (where should we build engineering capacity?), analyze what you have (existing team composition, skills, geographic distribution), model what you need (salary benchmarks, talent availability, time zone requirements), identify the gap (current state vs. each option), and build a plan (the recommendation). The methodology scales down. The discipline does not change.

Applying Principle 2: Experimenting with Onboarding

Six months after the Berlin decision, Claros had ten engineers in the Berlin office -- three transfers from Austin and seven new hires. The onboarding experience for the first two new hires had been rough. Both reported feeling disconnected from the Austin team during their first month. One described the experience as "being handed a laptop and a Slack login and told to figure it out."

The Austin onboarding process was informal and heavily dependent on physical proximity: sit next to your team, absorb context through osmosis, ask questions when you get stuck, go to lunch with your manager. It worked in Austin because the entire company was in the same building. It did not translate to Berlin, where "sit next to your team" meant "open a Zoom call to someone seven time zones away and hope they are available."

Lisa wanted to fix the Berlin onboarding. She did not have time or data to build a comprehensive solution. But Bock's second principle suggested an alternative: experiment.

Lisa designed two onboarding variants for the next four Berlin hires, who were starting in two pairs over consecutive months.

Variant A (structured remote): A documented onboarding schedule for the first two weeks -- specific meetings with specific people at specific times, a daily check-in with a Berlin-based onboarding buddy, a pre-populated list of "first tasks" designed to produce a small, visible contribution within the first week, and a 30-day milestone plan.

Variant B (immersion trip): The same documented schedule, plus a one-week trip to Austin during the first month. During the trip, the new hire would sit with their team, attend sprint planning in person, have lunch with their manager and skip-level, and present their "first task" output to the broader engineering team.

The two hires in month one received Variant A. The two hires in month two received Variant B.

This was not a rigorous experiment. The sample size was two per variant. There was no randomization -- the assignment was determined by start date. A dozen confounding variables (the specific managers, the specific projects, the specific personalities) could explain any difference in outcomes. Bock's Google would never have published this as a finding.

But Lisa was not trying to publish a finding. She was trying to learn something useful. And she did.

Variant A results: Both hires reported feeling "somewhat connected" to the Austin team after 30 days (3 out of 5 on a simple survey). Both made their first meaningful code contribution within 10 days. One reported that the onboarding buddy was the most valuable element. Neither reported feeling they understood the company culture.

Variant B results: Both hires reported feeling "connected" or "very connected" to the Austin team after 30 days (4 and 5 out of 5). Both made their first meaningful code contribution within 7 days. Both cited the Austin trip as transformational. One said: "I learned more about how this company actually works in five days in Austin than in three weeks of Slack messages."

The cost difference was approximately $3,500 per hire (flights, hotel, meals for one week in Austin). Against an average Berlin engineer salary of EUR 90,000, the trip cost represented less than 4% of first-year compensation. If it reduced time-to-productivity by even two weeks -- which the qualitative evidence suggested it did -- the ROI was strongly positive.
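The back-of-envelope ROI is easy to reproduce. This sketch follows the article in comparing the USD trip cost directly against the EUR salary; the two-week ramp gain is the qualitative estimate from the 30-day tracking, not a measured number.

```python
# Back-of-envelope ROI for the Variant B immersion trip.
# USD and EUR are treated at par, as the article's ~4% figure implies;
# the two-week productivity gain is an estimate, not a measurement.

trip_cost = 3_500               # flights, hotel, meals for one Austin week
salary = 90_000                 # average Berlin engineer base (EUR)

cost_share = trip_cost / salary
ramp_value = (2 / 52) * salary  # value of ~2 weeks' earlier productivity

print(f"Trip cost as share of first-year base: {cost_share:.1%}")  # → 3.9%
print(f"Approximate value of faster ramp: {ramp_value:,.0f}")      # → 3,462
```

On these assumptions the trip roughly pays for itself on ramp time alone, before counting any effect on connection scores or retention.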

Lisa did not declare Variant B the winner based on four data points. She adopted Variant B as the default for Berlin onboarding and committed to tracking the same metrics (30-day survey, time-to-first-contribution) for the next eight hires to see if the pattern held. After twelve Berlin onboarding cycles (eight more hires over the following year), the pattern was consistent: the Austin immersion trip correlated with higher 30-day connection scores and faster time-to-contribution in every case.

Was this proof? No. Was it useful? Yes. Bock's principle, scaled down, produced exactly what it promised: not certainty, but learning.

Applying Principle 3: Democratizing Compensation Data

The Berlin office created an immediate compensation transparency problem. Claros had never published its compensation philosophy because it had never needed one. In a single-office, single-country company, compensation was set through a combination of offer negotiation, annual reviews, and the CEO's judgment. There were no formal bands, no documented methodology, and no published rationale.

When the Berlin engineers discovered -- through the informal salary-sharing that is endemic in engineering culture -- that their Austin counterparts at the same level earned 40-60% more in base salary, the reaction was predictable. Two Berlin engineers raised the issue with their manager within the first quarter. One framed it as a fairness question. The other framed it as a motivation question: "Why would I push for a promotion when my promoted salary will still be less than what a junior in Austin makes?"

Lisa recognized this as a version of a problem Bock had confronted at Google, though at a different scale. Google's solution was radical transparency about methodology -- explaining that pay reflected local market rates, that total compensation included equity that was valued identically regardless of location, and that the company benchmarked against the top quartile in each local market. The system was defensible because it was explainable.

Claros had no system. It had outcomes -- a set of salaries that reflected individual negotiations -- but no methodology behind them. Sharing the outcomes would create more confusion, not less. Lisa needed to build the system before she could democratize it.

Over three months, Lisa built what she called the "Comp Framework" -- not a comprehensive compensation system, but a documented set of principles and benchmarks that could be shared openly.

The framework had four elements:

1. A stated philosophy. "Claros targets the 65th percentile of local market compensation for each role and level. We pay competitively in every market we operate in, benchmarked against companies of similar size, stage, and industry in that market."

The 65th percentile was a deliberate choice. The 50th percentile (market median) would have been cheaper but made Claros uncompetitive in both Austin and Berlin. The 75th percentile would have been expensive and unsustainable at their stage. The 65th was defensible: above average, not top of market, appropriate for a Series B company that offered equity upside and interesting technical challenges but not Google-level cash compensation.

2. Level definitions. Lisa created five engineering levels (E1 through E5) with one-paragraph descriptions of the expectations at each level. These were deliberately simple -- not the 40-page leveling rubrics of big tech companies, but enough to answer "what level am I?" and "what does the next level require?" She reviewed them with the CTO and three senior engineers before publishing.

3. Market benchmarks by location. For each level, Lisa published the local market range (25th to 75th percentile) for Austin and Berlin, sourced from Glassdoor, Levels.fyi, and Mercer data that she accessed through a one-time report purchase ($2,500). She showed where Claros's target (65th percentile) fell within each market's range.

| Level | Austin Range (25th-75th) | Austin Target (65th) | Berlin Range (25th-75th) | Berlin Target (65th) |
| --- | --- | --- | --- | --- |
| E1 (Junior) | $95K-$125K | $118K | EUR 45K-EUR 58K | EUR 54K |
| E2 (Mid) | $125K-$160K | $150K | EUR 55K-EUR 72K | EUR 67K |
| E3 (Senior) | $160K-$200K | $188K | EUR 70K-EUR 92K | EUR 85K |
| E4 (Staff) | $195K-$240K | $225K | EUR 85K-EUR 110K | EUR 102K |
| E5 (Principal) | $230K-$280K | $260K | EUR 100K-EUR 130K | EUR 120K |
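One simple way to turn published 25th/75th benchmarks into a 65th-percentile target is linear interpolation. This is a sketch of the idea only -- the article does not say how Lisa derived her exact targets, and survey distributions are rarely linear between percentiles.

```python
# Linear interpolation between published percentile benchmarks.
# target_pay is a hypothetical helper, not Lisa's documented method.

def target_pay(p25, p75, pct=65):
    """Place a target percentile on the straight line through
    the 25th- and 75th-percentile benchmark values."""
    return p25 + (p75 - p25) * (pct - 25) / 50

# Austin E3: published range $160K-$200K
print(f"${target_pay(160_000, 200_000):,.0f}")  # → $192,000
```

The interpolated $192K lands close to, but not exactly on, the published $188K Austin E3 target -- consistent with targets read off actual survey percentiles rather than a straight line.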

4. An equity explainer. Lisa documented how equity grants worked -- vesting schedule, refresh grant eligibility, and the rationale for why equity was valued the same regardless of location. A Berlin E3 engineer received the same equity grant as an Austin E3 engineer, which partially offset the base salary differential.

Lisa published the framework in an internal Notion page and presented it at an all-hands meeting. She did not present it as a final product. She presented it as a first version -- "this is where we are, this is how we're thinking about it, here's what we know and what we're still figuring out."

The reaction was not universal celebration. The Berlin engineer who had raised the fairness concern was not satisfied -- he believed location-based pay was fundamentally inequitable and that a global company should pay the same rate everywhere. Lisa acknowledged his position, explained the market-rate rationale (a global flat rate would either overpay relative to local markets in some locations or underpay relative to local markets in others), and committed to revisiting the philosophy annually.

But the broader reaction was notably positive. Several engineers in both Austin and Berlin said it was the first time they had understood how their compensation was determined. Two engineers who had been considering leaving told their managers that the transparency had changed their thinking -- not because their pay had changed, but because they now understood the logic behind it. Trust increased. Attrition in the quarter following the publication dropped to zero, compared to an average quarterly attrition rate of 4% in the prior year.

Bock's principle, adapted: you do not need Google's compensation team to democratize data. You need a stated philosophy, defensible benchmarks, and the willingness to show your work. The framework was imperfect. It was also more than 95% of 200-person companies have.

Applying Principle 4: Defaulting to Open on Workforce Planning

The fourth Bock principle -- default to open -- found its most impactful application when Lisa shared workforce planning data with the full leadership team.

Twelve months after the Berlin office opened, Claros was evaluating whether to proceed with the Bangalore expansion that had been deferred during the initial country selection. The CTO was enthusiastic. India offered a large talent pool, significant cost savings, and the prospect of near-24-hour engineering coverage when combined with Austin and Berlin.

In most 200-person companies, the workforce planning discussion would have happened between the CEO, the CTO, and the CFO -- with the VP of People invited to answer questions about logistics. Lisa, following Bock's "default to open" principle, proposed a different approach: share the workforce planning model with the entire leadership team (eight people, including the VP of Sales, VP of Marketing, VP of Customer Success, and VP of Engineering) and let the data drive the conversation.

Lisa built a simple workforce planning model in Google Sheets -- no specialized tools, no workforce planning software. The model followed Sparkman's five-step framework:

Step 1: Business strategy. Claros planned to grow ARR from $42 million to $70 million over 18 months, requiring approximately 40 additional engineers (based on the existing ratio of engineers to ARR and the CTO's technical roadmap).

Step 2: Current workforce. Claros had 82 engineers: 65 in Austin, 12 in Berlin (two more had been hired since the initial ten), and 5 remote across US states.

Step 3: Future needs. 40 additional engineers, with a mix of senior (60%) and mid-level (40%) to maintain the current seniority distribution.

Step 4: Gap analysis. At current Austin hiring velocity (4-5 engineers per quarter), filling 40 positions would take 8-10 quarters -- over two years. Austin time-to-fill had increased from 45 days to 62 days over the prior year as competition for talent intensified. Berlin was hiring at 2-3 per quarter. The gap between need and capacity was roughly 15-20 engineers that neither Austin nor Berlin could absorb in the required timeframe.

Step 5: Scenarios. Lisa modeled three options:

| Scenario | Hire 40 Engineers in... | Total Annual Cost (Year 1) | Time to Full Capacity | Risks |
| --- | --- | --- | --- | --- |
| A: Austin + Berlin only | Austin: 25, Berlin: 15 | $7.2M | 20-24 months | Austin market saturation, slower than roadmap |
| B: Add Bangalore (15 engineers) | Austin: 15, Berlin: 10, Bangalore: 15 | $5.8M | 14-16 months | Time zone coordination, entity setup lead time, cultural integration |
| C: Add Bangalore via EOR (pilot 5, then scale) | Austin: 15, Berlin: 10, EOR Bangalore: 5 initial, scale to 15 | $6.1M (Year 1), $5.6M (Year 2 post-entity) | 16-18 months | EOR cost premium for first year, transition complexity |
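The core of the Google Sheets model is a small parallel-capacity calculation: each site hires simultaneously, so time-to-full-capacity is set by the slowest site. The sketch below uses the article's lower-bound Austin velocity; the Bangalore rate of ~3 hires per quarter is an assumption, and ramp time and attrition are ignored, as they appear to be in the simple model.

```python
# Minimal gap-analysis model: sites hire in parallel, so the
# slowest site determines time to full capacity. Velocities are
# hires per quarter; the Bangalore rate is an assumed placeholder.

import math

def months_to_fill(plan):
    """plan maps site -> (engineers needed, hires per quarter)."""
    quarters = max(math.ceil(need / rate) for need, rate in plan.values())
    return quarters * 3

scenario_a = {"Austin": (25, 4), "Berlin": (15, 2.5)}
scenario_b = {"Austin": (15, 4), "Berlin": (10, 2.5), "Bangalore": (15, 3)}

print(months_to_fill(scenario_a))  # → 21 (article estimate: 20-24 months)
print(months_to_fill(scenario_b))  # → 15 (article estimate: 14-16 months)
```

Even this toy version reproduces the shape of the decision: adding a third hiring market shortens the critical path by roughly half a year, which is the argument the scenario table makes in dollar terms.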

The VP of Customer Success raised a question nobody else had asked: "Our fastest-growing customer segment is in APAC. If we have engineers in Bangalore, can we also hire a customer success team there?" The VP of Sales added: "Our APAC pipeline has 15 qualified opportunities that we can't close because we don't have anyone in those time zones for demos and onboarding calls."

Suddenly, the Bangalore decision was not just about engineering cost arbitrage. It was about APAC market access -- a strategic consideration that had been invisible when workforce planning was locked in the CEO-CTO-CFO triangle. The VP of People had created the conditions for this insight by sharing the data broadly enough for the VP of Customer Success and VP of Sales to connect it to their own strategic priorities.

Holbeche's alignment model explains why this mattered. HR-business alignment is not HR understanding the business strategy and building people plans to support it. It is a continuous loop where people data informs business strategy. Lisa's workforce planning model did not just support the Bangalore decision. It changed the scope of the Bangalore decision by making workforce data visible to leaders who could connect it to market data that the People team did not have.

The leadership team selected Scenario C -- an EOR pilot with five engineers in Bangalore, with a plan to establish an entity once the pilot validated the model. The decision was unanimous. It was also the first workforce planning decision at Claros where every functional leader had contributed data and perspective.


Talent Navigator Lens: Growth Mindset

Lisa's adaptation of Bock's model is a case study in the Growth Mindset domain of the Talent Navigator Lens -- the behavioral competency that measures curiosity, resilience, feedback receptivity, and the ambition to expand beyond established competencies.

What the Talent Navigator Lens would have predicted: Lisa's prior experience was at Workday, a large enterprise with established HR data infrastructure. Moving to a 190-person company with no data capability could have triggered one of two behavioral responses: (a) attempting to replicate enterprise-grade systems at startup scale (overbuilding), or (b) abandoning data-driven approaches entirely because the infrastructure was not there (underbuilding). Lisa's response -- adapting enterprise principles to startup constraints -- reflects the Learning Agility sub-dimension: the ability to rapidly learn what works in a new context rather than importing what worked in the old one.

Failure factor at play: Comfort Zone Preference -- the tendency to stay in the defined lane and avoid challenges. Lisa could have stayed in the comfort zone of operational HR -- processing the international expansion logistics without influencing the strategic decision. Instead, she used Bock's principles to redefine what "data-driven" meant at her scale: not statistical rigor, but disciplined observation. That shift required intellectual courage and a willingness to present imperfect data to a leadership team accustomed to engineering-grade precision.

The high-scoring behavior: An HR leader with strong Growth Mindset would do what Lisa did: identify the principles that transfer across scales, adapt the execution to match available resources, present imperfect data with appropriate caveats rather than waiting for perfect data that never arrives, and treat every initiative as an experiment that generates learning. The Growth Mindset domain does not measure whether you have built a 400-person analytics team. It measures whether you are curious enough to ask the right questions, resilient enough to work with imperfect answers, and ambitious enough to expand HR's strategic influence even when the infrastructure suggests you cannot.

For more on the Talent Navigator Lens behavioral assessment methodology and its application to global HR challenges, see our assessment tools at [Global HR Navigator](/assessments).


Claros's People Analytics in Numbers

Lisa's data-driven approach did not require a data science team. Here is what it actually required:

| Resource | Cost | Time Investment |
|---|---|---|
| Glassdoor / Levels.fyi salary research | Free | 8 hours |
| Mercer one-time compensation benchmark report (Austin + Berlin + Bangalore) | $2,500 | 4 hours to analyze |
| EOR provider quotes (Deel, Remote, local UK provider) | Free (sales process) | 6 hours (3 calls + research) |
| External recruiter conversations (2 recruiters, market intelligence) | Free (relationship-based) | 3 hours |
| Google Sheets workforce planning model | Free | 12 hours to build, 2 hours/month to maintain |
| Notion page for comp framework | Free (existing subscription) | 8 hours to build initial version |
| Onboarding experiment (Variant B travel cost per hire) | $3,500/hire | 2 hours to design, 1 hour/hire to track |
| 30-day onboarding survey (Google Form) | Free | 30 minutes to create, 15 minutes/response to review |
| Total incremental investment (Year 1) | ~$6,000 | ~50 hours |

For context, Google's People Analytics team budget was estimated at $20-30 million annually. Lisa achieved directionally similar outcomes -- data-informed country selection, experimentation-based process improvement, compensation transparency, strategic workforce planning -- for 0.02% of the cost.

The difference was precision. Google could tell you that structured interviews predicted on-the-job performance with an r-squared of 0.28. Lisa could tell you that new Berlin hires who visited Austin in their first month reported higher connection scores than those who did not. Google's finding was publishable. Lisa's finding was useful. For a 200-person company, useful beats publishable every time.

Business Outcomes (12 Months After Berlin Office Opening)

| Metric | Before (Austin only) | After (Austin + Berlin) | Change |
|---|---|---|---|
| Engineering headcount | 75 | 94 | +25% |
| Time to fill (senior eng, Austin) | 62 days | 58 days (reduced pressure) | -6% |
| Time to fill (senior eng, Berlin) | N/A | 48 days | Faster than Austin |
| Offer acceptance rate (Austin) | 68% | 71% | +3 pts |
| Offer acceptance rate (Berlin) | N/A | 82% | Higher than Austin |
| Annual engineering attrition | 16% | 11% | -5 pts |
| Cost per engineering hire (Austin) | $32,000 | $30,000 | -6% |
| Cost per engineering hire (Berlin) | N/A | $18,000 | 44% below Austin |
| Average days to first code contribution (Berlin) | N/A | 8 days (with Austin trip) | Tracked from month 3 |

The Compensation Transparency Effect

| Metric | Before Framework Published | 6 Months After | Change |
|---|---|---|---|
| Quarterly attrition rate | 4.0% | 1.5% | -2.5 pts |
| "I understand how my pay is determined" (survey) | 28% agree | 74% agree | +46 pts |
| Compensation-related questions to People team (monthly) | 12 | 3 | -75% |
| Glassdoor rating (overall) | 3.6 | 4.1 | +0.5 |

What Claros Could NOT Measure (Honest Limitations)

Lisa was transparent about the boundaries of her data:

  • Causal attribution: Did the comp framework reduce attrition, or was it the Berlin office excitement, or the new engineering projects, or the market cooling? Lisa could not isolate the variable. She could say all four happened and attrition dropped. She could not say which one caused it.
  • Long-term trends: Twelve months of Berlin data was enough to spot patterns. It was not enough to know if those patterns would hold. The high Berlin offer acceptance rate might reflect the novelty of a well-funded American startup in Berlin's market. It might normalize as Claros became less novel.
  • Counterfactual: Would London have been a better choice than Berlin? Lisa's data suggested Berlin was the right decision given the available information. But she could not know what the London counterfactual would have produced. The three German engineers who seeded the Berlin office were a genuine advantage. If Claros had happened to have three British engineers, the data might have pointed the other way.
  • Sample sizes: Every finding was based on small numbers. The onboarding experiment had four participants. The attrition data was a single quarter. The comp satisfaction survey had 190 responses. None of this would survive peer review. All of it was more useful than the alternative, which was no data at all.

Bock would recognize this as exactly right. The point of data-driven HR is not to achieve social science rigor. It is to be less wrong than the default, which is decisions made on gut feel, recency bias, and the loudest voice in the room.


The Adapted Framework: SMB People Analytics Starter Kit

Bock's model, adapted for companies with 50-500 employees and zero data scientists, reduces to four practices that can be implemented in 30 days with existing tools.

Practice 1: The Decision Data Sheet

What it is: For every significant people decision (hiring in a new country, changing comp structure, reorganizing teams, selecting vendors), create a one-page document with: (a) the decision to be made, (b) the options being considered, (c) the data available for each option, (d) what the data suggests, and (e) what the data does not tell you.

Why it works: The act of writing the document forces you to separate data from opinion. Most leadership teams conflate the two. A decision data sheet makes the distinction visible: "Here is what we know. Here is what we think. Here is where we are guessing."

Tools required: Google Docs or Notion. No specialized software.

Time investment: 4-8 hours per decision, including data collection.

The Bock principle it applies: Let data inform decisions, not justify them.

Practice 2: The Paired Comparison

What it is: When implementing any new HR process (onboarding, performance review format, interview structure), run two versions with consecutive cohorts. Track 2-3 simple metrics. After 6-8 cycles, review what you have learned.

Why it works: It produces directional learning at zero marginal cost. You are already onboarding people, already conducting interviews, already running performance reviews. Running two versions instead of one costs nothing except the 2-3 hours to design the second version and track the metrics.

Limitations to accept: Your sample sizes will be small. Your findings will not be statistically significant. You will not be able to isolate variables. Accept all of this. The question is not "Is this proof?" The question is "Is this better than not looking at all?" The answer is always yes.

Tools required: A Google Form for surveys. A spreadsheet for tracking. Nothing else.

Time investment: 2-3 hours to design, 15-30 minutes per cycle to track.

The Bock principle it applies: Experiment, even at small scale.
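As a sketch of what the tracking step can look like once the spreadsheet data is exported, the paired comparison reduces to a few lines of Python. The cohort values and variant labels below are hypothetical illustrations, not Claros's actual numbers:

```python
from statistics import mean

# Hypothetical onboarding cohorts: days to first code contribution.
# Variant A = standard remote onboarding; Variant B = adds an HQ visit.
variant_a = [14, 19, 11, 16]
variant_b = [8, 7, 10, 9]

def summarize(name, values):
    """Report cohort size and average outcome for one variant."""
    print(f"{name}: n={len(values)}, mean={mean(values):.1f} days")

summarize("Variant A", variant_a)
summarize("Variant B", variant_b)

# Directional read only: with n=4 per cohort this is a signal worth
# investigating, not statistical proof that one variant is better.
diff = mean(variant_a) - mean(variant_b)
print(f"Variant B faster by {diff:.1f} days on average")
```

The same comparison works in a spreadsheet with two AVERAGE formulas; the point is the discipline of recording both cohorts consistently, not the tooling.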

Practice 3: The Published Framework

What it is: For your two or three most sensitive people processes (compensation, promotion, performance evaluation), publish the methodology -- not the individual outcomes, but the principles, the data sources, and the decision logic. Make it available to all employees, not just managers.

Why it works: Opacity breeds distrust. When employees cannot see how decisions are made, they assume the worst: favoritism, politics, arbitrary judgment. When they can see the framework, they may still disagree with the outcome, but they trust the process. And process trust is more durable than outcome satisfaction because outcomes change, but a fair process remains fair.

  • Compensation: Philosophy (what percentile you target, which markets you benchmark against, how you handle location differentials), level definitions, and market ranges by level and location.
  • Promotion: Criteria at each level, timeline expectations, who makes the decision, how to nominate yourself.
  • Performance: Evaluation criteria, rating scale, how ratings connect to compensation changes.

What NOT to publish: Individual salaries, individual performance ratings, specific compensation offers. The framework is public. The individual data is private.

Tools required: An internal wiki or Notion page.

Time investment: 20-40 hours for the initial build (across 2-3 processes). 2-4 hours per quarter to update.

The Bock principle it applies: Democratize data -- share what you know.

Practice 4: The Open Planning Model

What it is: Share your workforce planning model with the full leadership team, not just the CEO and CFO. Include headcount projections, hiring velocity data, cost-per-hire by location, attrition trends, and capacity constraints. Update it quarterly.

Why it works: It converts HR from an execution function ("hire these people") into a strategic function ("here is what the people data says about our growth plan"). It also surfaces insights that the People team cannot generate alone -- because other functional leaders bring market data, customer data, and operational data that intersect with workforce planning in ways the People team cannot predict.

  • Current headcount by location, function, and level
  • Hiring pipeline (open reqs, candidates in process, time-to-fill by role and location)
  • Attrition trends (voluntary, involuntary, by function and tenure)
  • Cost modeling for expansion scenarios
  • Capacity constraints (roles you cannot fill, markets where hiring is slow, skill gaps)

Tools required: Google Sheets. One tab per section. Updated quarterly.

Time investment: 12-16 hours for the initial build. 3-4 hours per quarter to update and present.

The Bock principle it applies: Default to open.
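The cost-modeling tab of an open planning model can be this simple. A minimal sketch, using the cost-per-hire figures from the Berlin-vs-Austin table above; the hiring plan itself is a hypothetical placeholder:

```python
# Cost per engineering hire, taken from the article's outcome table.
COST_PER_HIRE = {"Austin": 32_000, "Berlin": 18_000}

def hiring_cost(location: str, hires: int) -> int:
    """Total recruiting cost for a planned number of hires in one location."""
    return COST_PER_HIRE[location] * hires

# Hypothetical next-year hiring plan for an expansion scenario.
plan = {"Austin": 6, "Berlin": 10}

for loc, n in plan.items():
    print(f"{loc}: {n} hires -> ${hiring_cost(loc, n):,}")
print(f"Scenario total: ${sum(hiring_cost(l, n) for l, n in plan.items()):,}")
```

In practice this lives as one Google Sheets tab with a cost-per-hire column multiplied against planned headcount; the code only makes the arithmetic explicit.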

The 30-Day Implementation Plan

| Week | Action | Output |
|---|---|---|
| Week 1 | Audit your next 2-3 pending people decisions. For the most significant, create a Decision Data Sheet. | One Decision Data Sheet used in a leadership meeting. |
| Week 2 | Identify one HR process you are about to run (onboarding, interviews, reviews). Design a Paired Comparison with 2-3 trackable metrics. | A documented experiment with success metrics. |
| Week 3 | Draft your compensation philosophy and level definitions. Review with 2-3 trusted leaders. Revise. | A publishable Comp Framework (internal Notion page). |
| Week 4 | Build a workforce planning model in Google Sheets. Present it to the full leadership team. | A quarterly planning model shared with all functional leaders. |

Total cost: approximately $0-$3,000 (depending on whether you purchase benchmark data). Total time: approximately 40-60 hours of People team effort across the month. Total infrastructure required: Google Docs, Google Sheets, Google Forms, a Notion page.


Your Monday Morning

Laszlo Bock built something extraordinary at Google. He also, inadvertently, built a narrative that intimidated most HR leaders into thinking data-driven HR required Google-scale resources. It does not. The principles are universal. The execution scales to any size. Here are five actions grounded in the adapted framework.

1. Kill the Justification Instinct

The next time someone asks you to "pull some data" to support a decision that has already been made, push back gently. Ask: "Can I collect the data before we decide, so it can inform the decision rather than justify it?" This is a behavioral shift, not a technical one. It requires no tools, no budget, and no data scientists. It requires the willingness to let data change your mind -- and the courage to present data that might change someone else's.

2. Run Your First Paired Comparison This Month

Pick any HR process you are about to execute with at least four people (onboarding a cohort, conducting interviews for a role, running quarterly reviews). Design two versions. Track one or two simple outcomes (time to first contribution, candidate conversion rate, satisfaction survey scores). After four cycles, review what you learned. You will not have proof. You will have something better than nothing. That is the standard.

3. Publish Your Compensation Philosophy by End of Quarter

If you do not have a compensation philosophy, write one this week. It does not need to be perfect. It needs to exist. State what percentile you target, which markets you benchmark against, and how you handle location differentials (if applicable). Publish it internally. Share it at an all-hands meeting. Answer questions honestly, including "I don't know yet, and here is when I will." The first company to show its work on compensation earns trust that is very difficult for competitors to match.

4. Share Your Workforce Plan with One More Leader Than You Usually Would

If workforce planning currently happens between you and the CEO, add the VP of Engineering or the VP of Sales to the next conversation. If it happens in the C-suite, share the model with all functional leaders. Each person who sees the data is a potential source of insight that you cannot generate alone. Lisa's most important workforce planning finding -- that Bangalore was a customer success opportunity, not just an engineering arbitrage -- came from a leader who had never been invited to the workforce planning discussion before.

5. Track Three Metrics, Not Thirty

The impulse to build a comprehensive HR dashboard is strong and counterproductive at small scale. You do not need thirty metrics. You need three that you track consistently, update quarterly, and actually use to make decisions. For most companies between 50 and 500 employees, these three are sufficient:

  • Time to fill (by role family and location) -- tells you whether your hiring is keeping pace with your growth plan.
  • Voluntary attrition rate (by function and tenure band) -- tells you whether you are retaining the people you invest in developing.
  • Offer acceptance rate (by role family and location) -- tells you whether your total compensation and employer brand are competitive.

Everything else is useful eventually. These three are useful now.
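To make the three metrics concrete, here is a minimal Python sketch computing them from simple hiring and departure records. The record layout, field names, and all data values are illustrative assumptions, not Claros's actual model:

```python
from datetime import date

# Hypothetical requisitions: (role_family, location, opened, filled, offer_accepted)
reqs = [
    ("backend", "Berlin", date(2023, 1, 9), date(2023, 2, 24), True),
    ("backend", "Berlin", date(2023, 2, 1), date(2023, 3, 20), True),
    ("backend", "Austin", date(2023, 1, 16), date(2023, 3, 17), False),
]

def time_to_fill(records):
    """Average days from requisition opened to requisition filled."""
    days = [(filled - opened).days for _, _, opened, filled, _ in records]
    return sum(days) / len(days)

def offer_acceptance_rate(records):
    """Share of requisitions whose offer was accepted."""
    return sum(1 for r in records if r[4]) / len(records)

def voluntary_attrition_rate(departures: int, avg_headcount: int) -> float:
    """Annual voluntary departures as a fraction of average headcount."""
    return departures / avg_headcount

print(f"Time to fill: {time_to_fill(reqs):.0f} days")
print(f"Offer acceptance: {offer_acceptance_rate(reqs):.0%}")
print(f"Attrition: {voluntary_attrition_rate(9, 85):.0%}")
```

Group the same records by role family, location, function, or tenure band and you have the full recommended breakdown, still in a spreadsheet-sized dataset.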


The Bigger Lesson

Laszlo Bock's Work Rules! is a remarkable book about a remarkable company. It is also, for most HR professionals, aspirational to the point of paralysis. The implicit message -- "look what you can do with 400 analysts and unlimited budget" -- leaves the average VP of People at a 200-person company feeling like they are bringing a butter knife to a gunfight.

Lisa's story corrects this. The principles that made Google's People Operations effective are not technical innovations. They are behavioral disciplines. Let data lead. Experiment constantly. Share what you know. Default to open. None of these require a PhD in statistics. All of them require the courage to operate with imperfect information rather than waiting for perfect information that never arrives.

The adapted framework -- Decision Data Sheets, Paired Comparisons, Published Frameworks, Open Planning Models -- is not a miniature version of Google's People Analytics. It is a fundamentally different implementation of the same principles, designed for organizations where the People team is three people, the data infrastructure is Google Sheets, and the standard for success is not statistical significance but directional usefulness.

Bock himself, in a less-quoted passage of Work Rules!, makes this point: "You don't need to be a data scientist to use data. You just need to be willing to look at what's happening, honestly, instead of relying on what you believe is happening." That willingness -- to look, to count, to compare, to share, to adjust -- is available to every HR leader at every company size.

Lisa did not build Google's People Analytics team. She built something more useful: a People function that used data to make better decisions than it would have made without data. That is the bar. It is achievable. And it starts Monday morning.


Sources & Further Reading

  • Bock, L. (2015). Work Rules! Insights from Inside Google That Will Transform How You Live and Lead. Twelve Books. -- The four principles adapted in this article (data-informed decisions, experimentation, data democratization, default to open) are drawn from Bock's account of Google's People Operations evolution. The book provides the aspirational model; this article provides the SMB adaptation.
  • Sparkman, R. (2018). Strategic Workforce Planning: Developing Optimized Talent Strategies for Future Growth. Kogan Page. -- Sparkman's five-step workforce planning framework directly informed Lisa's country selection process and the Open Planning Model in the adapted framework. Steps 1-5 (understand strategy, analyze current workforce, model future needs, gap analysis, action plans) are the operational backbone of Practice 4.
  • Holbeche, L. Aligning Human Resources and Business Strategy. Routledge. -- Holbeche's continuous alignment loop explains why sharing workforce planning data with functional leaders produced strategic insights that HR alone could not generate. The Bangalore-as-customer-success-opportunity finding is a textbook example of Holbeche's argument that alignment is bidirectional.
  • Dowling, P., Festing, M., & Engle, A. (2017). International Human Resource Management (8th ed.). Cengage. -- Dowling's international staffing and compensation frameworks provide context for the location-based compensation challenge and the country selection methodology.
  • Davenport, T. H., Harris, J., & Shapiro, J. (2010). "Competing on Talent Analytics." Harvard Business Review. -- The original HBR article that defined the talent analytics maturity model. Lisa's approach corresponds to what Davenport et al. call "Level 2: Advanced" -- using data to inform decisions without full predictive capability.
  • Sapient Insights Group (2024). HR Systems Survey. -- Industry benchmark data on HR analytics adoption rates and capability levels across company sizes.
  • Buffer (2013-present). Open salary formula and published compensation data. -- Buffer's public salary approach, while more radical than Lisa's framework, demonstrates the trust-building effect of compensation transparency at small scale.
  • Colquitt, J. A. (2001). "On the Dimensionality of Organizational Justice." Journal of Applied Psychology. -- The foundational research on procedural justice that explains why process transparency (how decisions are made) matters more than outcome transparency (what the decisions are).
  • Meyer, E. (2014). The Culture Map. PublicAffairs. -- Cultural dimensions relevant to understanding why compensation transparency lands differently in different national cultures (high-context vs. low-context attitudes toward financial disclosure).
  • Storey, J. Strategic Human Resource Management: A Research Overview. Routledge. -- Storey's personnel-administration-to-SHRM progression provides context for why the 200-employee stage is where HR either becomes strategic or remains operational.

This article is part of the Global HR Navigator's Framework Case Study series, which applies academic IHRM frameworks to real company decisions. For a companion analysis of how a company's culture exports internationally, see our [Spotify staffing case study](/articles/2026-02-24-spotify-international-staffing-dowling-framework). For the systems-level view of global HR technology, see our [HRIS systems thinking case study](/articles/2026-02-24-hris-systems-thinking-bassett-jones). Subscribe to The Global HR Brief for weekly framework-driven analysis of international workforce decisions.

Tags: people analytics, data-driven HR, SMB HR, international expansion, workforce planning, compensation strategy, HR for startups
