How to Effectively use Reo.dev's Developer Activity Score for your GTM

A short guide to strategically using Reo.dev's custom Developer Activity Score for your GTM - what to consider and pitfalls to avoid

Guides
by Aditya Ramakrishnan
Jun 30, 2025
8 min read

Introduction

We recently launched the ability for reo.dev customers to configure their Developer Activity Score (DAS). This article is a simple guide on how to effectively use this feature in your GTM engineering.

What is the Developer Activity Score (DAS)?

The reo.dev Developer Activity Score (DAS) is a personalized engagement metric that evaluates an individual developer’s interactions with your product ecosystem. It provides a numerical (0-100) and categorical (High, Medium, Low) score based on the recency, frequency, and intent of their activities.

Activity Score Column Highlighted in Reo.Dev Dashboard

In reo.dev, you can build segments to filter developers based on their activity score, either using the High-Medium-Low filter or the numeric score itself.
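As a rough illustration of how such a segment works conceptually, the sketch below buckets a 0-100 score into High/Medium/Low bands and applies both filter types. The thresholds and field names are assumptions for illustration, not reo.dev's actual cutoffs or API.

```python
# Hypothetical illustration: thresholds and field names are assumptions,
# not reo.dev's actual cutoffs or API.
developers = [
    {"email": "dev-a@example.com", "activity_score": 82},
    {"email": "dev-b@example.com", "activity_score": 47},
    {"email": "dev-c@example.com", "activity_score": 12},
]

def bucket(score: int) -> str:
    """Map a 0-100 score to an assumed High/Medium/Low band."""
    if score >= 70:
        return "High"
    if score >= 40:
        return "Medium"
    return "Low"

# Segment 1: categorical filter (High only)
high_intent = [d for d in developers if bucket(d["activity_score"]) == "High"]

# Segment 2: numeric filter (score of 60 or above)
score_filtered = [d for d in developers if d["activity_score"] >= 60]
```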

How Is the Developer Activity Score Calculated?

A developer's Activity Score is based on their interactions with key sources, including:

  • GitHub Activity
  • Documentation Activity
  • Website Activity
  • Form Signups
  • Product Usage
  • Community Activity

Reo.dev's AI scoring system aggregates all these signals and generates a score for each developer along these four parameters (a simple sketch of how they might combine follows the list):

  • Engagement — What is the type of activity? For example, a GitHub star is treated differently from an opened issue, which in turn is treated differently from a documentation page visit.
  • Intensity — How many unique developers from an organization generated that particular signal? 100 developers generating 1,000 pageviews on your docs is different from 10 developers generating the same number of pageviews.
  • Frequency — How often have these activities happened over a given time period?
  • Recency — How recently did the activity occur?
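Reo.dev's actual model is not public, but a toy version of combining these four parameters into a single 0-100 score might look like the sketch below. The weights and normalization are purely illustrative assumptions; the real AI also applies temporal decay, statistical validation, and the circuit breakers mentioned later in this article.

```python
# Toy sketch only: weights and normalization are illustrative assumptions,
# not reo.dev's actual model.
def toy_activity_score(engagement: float, intensity: float,
                       frequency: float, recency: float) -> float:
    """Each parameter is assumed to be pre-normalized to the 0-1 range."""
    weights = {"engagement": 0.35, "intensity": 0.25,
               "frequency": 0.20, "recency": 0.20}
    raw = (weights["engagement"] * engagement
           + weights["intensity"] * intensity
           + weights["frequency"] * frequency
           + weights["recency"] * recency)
    return round(100 * raw, 1)

# Strong engagement and recency, moderate intensity and frequency
print(toy_activity_score(engagement=0.9, intensity=0.5,
                         frequency=0.4, recency=0.8))  # 68.0
```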

What is the custom Developer Activity Score?

With our latest release, customers can now change the scores assigned to the various activity sources that our AI model uses to calculate the Developer Activity Score.

The sources for which you can change the score are listed below (a hypothetical configuration sketch follows the list):

  • GitHub Activity - Set individual scores for fork, watch, pull request, star, comment, and opened issue for Owned Repositories, Competitor Repositories, and Complementary Repositories.
  • Documentation Activity - Set individual scores for documentation pages reviewed as well as for copied content.
  • Website Activity - Set individual scores for website pages reviewed as well as for copied content.
  • Form Signups and Code Product Installations - Set individual scores for form signups, installations initiated, and commands executed.
  • Product Usage - Set individual scores within your product for Login activity, Screens visited and Content copied.
  • Community - Set individual scores for posts, reactions, replies, and joined community activities across Slack and LinkedIn.
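These scores are set in the reo.dev settings UI, not via code. Purely as a mental model, a custom weight configuration could be pictured as something like the following; every field name and value here is a hypothetical illustration.

```python
# Hypothetical representation of a custom weight configuration.
# All field names and values are illustrative; the real scores are
# configured in the reo.dev settings UI.
custom_weights = {
    "github": {
        "owned_repos": {"star": 5, "fork": 6, "watch": 3,
                        "pull_request": 9, "issue_opened": 8, "comment": 4},
        "competitor_repos": {"star": 2, "issue_opened": 3},
        "complementary_repos": {"star": 1},
    },
    "documentation": {"page_viewed": 4, "content_copied": 7},
    "website": {"page_viewed": 2, "content_copied": 5},
    "forms_and_installs": {"form_signup": 8, "install_initiated": 9,
                           "command_executed": 10},
    "product_usage": {"login": 3, "screen_visited": 6, "content_copied": 7},
    "community": {"post": 6, "reply": 5, "reaction": 2, "joined": 4},
}
```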

Read more here on changing the score from within the reo.dev product.

How does the Custom Developer Activity Score work?

The individual scores that you can now set for these different activity sources act as an input to the AI, indicating how strongly each source should be weighted for your GTM motion.

The parameters described above (engagement, intensity, frequency, and recency) continue to be applied by the AI to calculate the actual score.

Examples of use cases where modifying the Developer Activity Score is useful

Track multiple GitHub Repositories, but include only your own in Developer Activity Scoring

Suppose you want to track both your own GitHub repository and a complementary open-source GitHub repository, but you do not want the complementary repository to contribute any score to your DAS, perhaps because its volume of activity is too high.

In that case, you can set the activity score for your own GitHub repository to 10 and the score for the complementary repository to 0. This ensures the AI does not consider activity from the complementary repository when calculating the DAS.
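To see why zeroing a source matters, here is a minimal sketch assuming, for illustration only, that each signal contributes weight × count to the raw score:

```python
# Minimal sketch: a weight of 0 removes a source's contribution entirely.
# Assumes (for illustration only) contribution = weight * signal count.
weights = {"owned_repo_star": 10, "complementary_repo_star": 0}
signals = {"owned_repo_star": 4, "complementary_repo_star": 250}

contributions = {k: weights[k] * signals[k] for k in signals}
# {'owned_repo_star': 40, 'complementary_repo_star': 0}
# The complementary repo's 250 stars no longer move the DAS at all.
```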

Track product logins, but give higher weight to in-product activity

Suppose you want to track signups on your product as well as in-product activity (e.g., screens viewed), but you want to underweight the impact of product logins in your DAS. This is often the case for PLG motions, where you don't want to consider new signups for outreach until they've shown stronger product usage signals.

For this case, you can reduce the score for “Product Login” and increase the score for “Screen visited”. This instructs the reo.dev AI to weight the screen-visited signal more strongly for your account versus our baseline.
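The arithmetic below (again, an illustration rather than the actual model) shows how shifting weight from logins to screens visited can change which developer surfaces first:

```python
# Illustrative arithmetic only, not the actual scoring model:
# shifting weight from logins to screens visited changes who ranks higher.
devs = {
    "login_heavy": {"login": 20, "screen_visited": 2},
    "usage_heavy": {"login": 3, "screen_visited": 15},
}

def raw_score(signals, w_login, w_screen):
    return signals["login"] * w_login + signals["screen_visited"] * w_screen

baseline = {d: raw_score(s, w_login=5, w_screen=5) for d, s in devs.items()}
adjusted = {d: raw_score(s, w_login=1, w_screen=8) for d, s in devs.items()}
# baseline: login_heavy=110, usage_heavy=90  -> the login-heavy dev ranks first
# adjusted: login_heavy=36,  usage_heavy=123 -> the usage-heavy dev ranks first
```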

What are the Implications of custom DAS for your GTM motion?

DAS score customization gives RevOps teams unprecedented control over how developer activity signals influence account scoring. This capability represents a significant evolution beyond our AI-driven baseline—one that puts sophisticated intent signal calibration directly in your hands.

But with great power comes great responsibility.

The AI engine powering your baseline DAS already incorporates sophisticated signal processing, temporal decay algorithms, and statistical validation. When you customize scores, you're not replacing this intelligence so much as you're strategically directing it toward your specific GTM hypotheses.

Following are some guidelines for effectively using the custom DAS scores for your GTM motion.

Guidelines for using Custom DAS scores for your GTM motion

Update DAS once per quarter, or even less frequently if you have longer sales cycles

You need time for the impact of the new Developer Activity Score to percolate through your entire pipeline, and at least one sales cycle to measure the impact of any scoring changes. The longer your sales cycle, the more time you should give for the impact of the new Developer Activity Score to be felt. Companies with enterprise sales cycles exceeding six months should consider even longer intervals between adjustments.

The temptation to iterate quickly is understandable, especially when early results look promising. But scoring adjustments create complex ripple effects throughout your pipeline. A change that improves MQL-to-SQL conversion in month one might have a different impact on close rates in month three if it's not given time to stabilize.

Ideally, work on one signal group per iteration to isolate impact.

Adjust GitHub activity weights first, measure the results for a full quarter, then move to product signals. Simultaneous changes to multiple signal groups make it harder to determine which adjustments are driving results.

The exception to this is when you want to “turn off” signals from a particular source. If you have a clear hypothesis that a particular source should not be part of your developer activity score, then you would go ahead and turn that off right away.

Define success metrics and rollback scenarios

Document your baseline performance metrics, define leading success criteria (typically an increased number of MQLs, or higher MQL:SQL or SQL:SQO conversion rates), and establish rollback triggers if targets aren't met within 30/60 days.

Create rollback procedures before implementing changes. This includes documenting current weight settings, establishing a monitoring cadence, and setting a go/no-go decision timeline for rollback.

Have a clear hypothesis on desired pipeline impact before changing Developer Activity Score

Strong hypotheses identify specific pipeline problems, propose causal mechanisms, and suggest targeted solutions. Weak hypotheses ("let's increase GitHub priority to get more developers") lack the precision needed for effective validation and often create unintended consequences.

Effective hypotheses follow this format:

"Enterprise accounts are under-represented in our pipeline relative to their conversion value because they generate proportionally less community traffic. Therefore, we'll reduce scoring for community activities, and add increase scoring for key pages tracked for our Enterprise solutions."

Place activity signals at the beginning or end of your scoring waterfall, not in the middle.

If you are sending data from reo.dev into your CRM or Clay and combining the Developer Activity Score with other account scoring techniques (such as firmographics), the ideal approach is to place the Developer Activity Score either at the very beginning of your scoring sequence or at the very end.

Activity signals from reo.dev represent the most granular behavioral intelligence in your tech stack. Positioning them in the middle of complex scoring waterfalls creates interaction effects that make troubleshooting and optimization much more difficult.

Beginning placement works well for companies prioritizing behavioral intent over demographic fit, i.e., select accounts based on activity, and then score that subset on firmographic fit.

End placement suits organizations that want demographic qualification first, then activity-based prioritization, i.e., first filter your account universe based on fit, and then look at activity signals for just the “high fit” accounts.

Both approaches maintain clear signal attribution and enable easier debugging when scores don't match expectations.
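As a sketch of the two placements, assume you have exported account-level DAS alongside a firmographic fit score into your CRM or Clay; the thresholds and field names below are illustrative assumptions, not a prescribed setup.

```python
# Sketch of the two waterfall placements; thresholds and field names
# are illustrative assumptions.
accounts = [
    {"name": "acme", "das": 85, "fit_score": 40},
    {"name": "globex", "das": 30, "fit_score": 90},
    {"name": "initech", "das": 75, "fit_score": 80},
]

# Beginning placement: select on activity first, then rank that subset by fit.
activity_first = sorted(
    (a for a in accounts if a["das"] >= 60),
    key=lambda a: a["fit_score"], reverse=True,
)

# End placement: filter on firmographic fit first, then prioritize by activity.
fit_first = sorted(
    (a for a in accounts if a["fit_score"] >= 70),
    key=lambda a: a["das"], reverse=True,
)

print([a["name"] for a in activity_first])  # ['initech', 'acme']
print([a["name"] for a in fit_first])       # ['initech', 'globex']
```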

High volume activity sources can be more sensitive to changes

If your documentation pages receive tens of thousands of page views, and your GitHub repository gets a few stars in the same time period, changes in scoring for documentation will impact your overall DAS more than the same change in the score for a GitHub star.

While the reo.dev scoring AI does have “circuit breakers” so that very high activity volumes do not influence the score beyond a point, larger absolute activity volumes still make DAS more sensitive to score changes.
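A quick back-of-the-envelope example, deliberately ignoring those circuit breakers and any normalization, shows why the same weight change moves a high-volume source much more:

```python
# Back-of-the-envelope illustration (ignores circuit breakers and normalization).
doc_page_views = 20_000
github_stars = 15

before = {"docs": 2 * doc_page_views, "stars": 5 * github_stars}  # weights 2 and 5
after = {"docs": 3 * doc_page_views, "stars": 6 * github_stars}   # both raised by 1

print(after["docs"] - before["docs"])    # 20000 extra raw points from docs
print(after["stars"] - before["stars"])  # 15 extra raw points from stars
```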

Check the preview distribution before and after changing scores to see the impact

In the Developer Activity Score settings page, you can generate a preview distribution based on a fraction of the total activity signals for your account. Below is an example:

Preview Distribution Chart for DAS

When making modifications, do check the distribution of scores before and after to get a snapshot view of the extent of the changes.

  • If you see minimal change, the score you’ve changed may not have much underlying activity, so it isn’t shifting the distribution significantly.
  • If you see drastic changes, the situation is reversed: the source for which you’ve changed the score may already have a high number of activities and, as mentioned earlier, is therefore more sensitive to changes.
  • If your hypothesis anticipates drastic changes, go ahead; otherwise, rework your scoring.

Exceptional Case

The Activity Score is only calculated for accounts with at least one page visit activity in the last 30 days or any other activity in the last 180 days.

If an account does not meet these threshold criteria, a default score of 1 is assigned, which can skew your distribution toward the low end.

If you see many DAS = 1 scores in your distribution, it may be worth creating a segment that excludes this set of accounts and checking the DAS distribution on the remaining accounts.
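If you export account-level scores (for example, to a CSV or via a CRM sync), a quick check along these lines can show how much the defaulted accounts distort the picture; the file and column names here are assumptions about a hypothetical export.

```python
# Quick sketch of removing defaulted accounts before inspecting the distribution.
# The file name and "das" column are assumptions about a hypothetical export.
import csv
import statistics

with open("das_export.csv") as f:
    scores = [int(row["das"]) for row in csv.DictReader(f)]

active_only = [s for s in scores if s > 1]  # drop accounts with the default score of 1

print("all accounts:     median =", statistics.median(scores))
print("excluding DAS=1:  median =", statistics.median(active_only))
```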

Conclusion

The most successful DAS score customization experiments share a common thread: less manual intervention, not more.

DAS score customization is a strategic tool for addressing specific pipeline optimizations, not a tactical fix for general lead quality issues. When used with discipline and clear hypotheses, it enables powerful alignment between developer activity patterns and revenue outcomes. When used reactively or too broadly, it can create more problems than it solves.

Start conservatively, measure obsessively, and iterate patiently. Your future self—and your sales team—will thank you for the restraint that enables sustainable, compound improvements in pipeline quality and revenue predictability.

Convert developer-intent signals into revenue