
How to effectively use Reo.Dev's Custom Activity Score for your GTM motion

July 17, 2025
25 mins
Aditya Ramakrishnan
|
Product Marketing Lead at Reo.Dev
Mariam Hazrat
|
Founding PMM at Reo.Dev

Reo.Dev tracks developer activities across different sources, and based on the intensity, engagement, recency, and frequency of these activities, Reo.Dev’s AI model calculates an activity score at the account and developer level. Reo.Dev recently launched the custom activity score feature, which enables devtool companies to assign custom scores (on a scale of 0 to 10) to activity types like GitHub, documentation, product, website, form and code interactions, and communities. This score influences the overall developer and account activity score. To help teams make the most of this capability, we hosted a demo day session covering real-world use cases and best practices for configuring custom activity scores in Reo.Dev.

The session covered some key insights into the custom activity score. Attendees got a hands-on walkthrough of: 

  • Why use a custom activity score? 
  • How to configure custom activity scores in Reo.Dev? 
  • Key aspects to consider to get the most out of the custom activity score

Today we're talking about the activity score. It's actually been around for a while on Reo.Dev, but we recently released an update: the ability to put custom weights on the scores, so that the AI model underneath understands which sources are a priority for you and which to weigh less. So we'll talk about why, when, and how to use custom activity scores. I'll also do a demo on how to change the custom activity scores, and then I'll get into some considerations to get the most out of it for your GTM, because obviously this has cascading impact across the entire GTM pipeline and how prospects and deals flow right up to revenue. So I'll get into how to think about using something like this as effectively as possible.

So what is the activity score? It's a score that operates on two levels. It's an AI-generated metric, almost like an engagement health check for every developer engaging with your product, and it's a sum total of all their activities across all the different sources that you're tracking. So if you're tracking GitHub, product, and documentation, the score is an aggregation of all that activity at the individual level for developers. Then at the account level, there's an account activity score, which is a measurement of developer engagement across all the identified developers for that organization, giving you a holistic view of activity at the account level as well. The score is always on a 0 to 100 scale: 0 to 20 is low, 21 to 60 is medium, and above 60 is high. And the score is real-time, so if you're looking at a score of 60 for a particular account today and it goes to 100 over a week, that's literally happening right now, a measure of real-time activity on the account.
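As a quick illustration, the banding described above can be expressed as a tiny helper. The thresholds come from the talk; the function name itself is just my own label:

```python
# Thresholds as quoted in the talk: 0-20 low, 21-60 medium, above 60 high.
def activity_band(score: int) -> str:
    if not 0 <= score <= 100:
        raise ValueError("activity score is on a 0 to 100 scale")
    if score <= 20:
        return "low"
    if score <= 60:
        return "medium"
    return "high"

print(activity_band(72))  # prints "high"
```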

And what does high, medium, low really mean? In real terms for developers, a high score is probably a developer who's definitely trying to build something with the product, because they're engaging across multiple touchpoints at a very high frequency and intensity. They may also be advocacy-ready, because if they've reached that point, they've been using the product well, there's some degree of affinity, and they definitely have a point of view on it. At a medium level, they are engaging, and it's not casual engagement, but they're probably experimenting with small apps and POCs and still not at the highest level relative to others. Low engagement is probably just getting started or initial curiosity, and there's not much you can conclude from it as far as interest or intent.

Moving from developers to accounts: for accounts, high activity is generally broad-spectrum engagement from multiple developers. Many of them being highly engaged is what drives account activity to be high, and at the account level that's probably indicative that they're looking to solve for a particular use case. It's unlikely that an account has multiple developers, all of them active across multiple touchpoints in recent time, unless there's a clear and present use case they want to solve for. Of all the accounts you have across the spectrum, the high-activity ones logically would be the most likely to engage with sales, because they have some degree of clarity on how they want to use your product and what they want to solve for.

Moderate engagement probably means accounts that are getting to that point; they'll need some nurturing. They may not be ready for an active sales conversation yet, but they're getting there. And low is going to be a case of either a few developers, the odd developer engaging deeply, which is probably personal interest, or a bunch of developers all just superficially engaging with the product. They probably haven't defined the problem or how they want to use your product, so they're probably not ready to engage with sales. But obviously, you would layer on the actual activity that's going on and all the other filters as well.

What activities feed into the developer score? At the developer level, it's a bunch of signals. We track everything in GitHub, we track documentation, we track website pages if you're tracking any of those, we track product usage like logins and key pages within the product, and forms and code interactions. So if someone has done a code copy from, say, your API sandbox, or has filled a form to sign up for the product, we would track all that, plus community mentions: keywords on the community, your own forums like Slack, or even public forums as well. And at the account level, this is basically an aggregation of all the activities at the developer level plus the count of developers itself. So all things being equal, ten developers are naturally going to get a higher account activity score than two developers. These are just the sources; the AI, of course, also looks at the depth of engagement, the intensity of it, such as how many pages have been viewed on documentation or how many comments have come in on GitHub. It looks at the frequency, so how often pages have been viewed or how many logins have happened and how frequently, as well as recency. So obviously, more points if someone has logged into the product today versus, say, a month ago.
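Reo.Dev's actual scoring model is AI-driven and not public, but as a rough mental model, a roll-up that rewards intensity, frequency, and recency, and in which more active developers lift the account score, might look like this sketch. The formulas, the 30-day decay constant, the breadth factor, and the function names are all illustrative assumptions, not the real model:

```python
import math
from datetime import date

# Hypothetical sketch only; Reo.Dev's real model is proprietary.
SOURCE_WEIGHTS = {"github": 1.0, "docs": 1.0, "product": 1.0,
                  "website": 1.0, "forms_code": 1.0, "community": 1.0}

def developer_score(activities, today=None):
    """activities: list of (source, event_count, last_seen_date) tuples."""
    today = today or date.today()
    total = 0.0
    for source, count, last_seen in activities:
        recency = math.exp(-(today - last_seen).days / 30)  # decays over ~a month
        intensity = math.log1p(count)                       # diminishing returns
        total += SOURCE_WEIGHTS[source] * intensity * recency * 10
    return min(round(total), 100)                           # clamp to the 0-100 scale

def account_score(dev_scores):
    """Roll-up: average developer score, boosted by breadth of engagement."""
    breadth = min(len(dev_scores) / 5, 1.0)  # saturates at ~5 active developers
    avg = sum(dev_scores) / len(dev_scores)
    return min(round(avg * (0.5 + 0.5 * breadth)), 100)
```

The key property the sketch captures is the one from the talk: the same activity counts for less as it ages, deeper engagement counts for more, and a wider pool of active developers raises the account roll-up.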

And yeah, there's another filter we have in Reo.Dev, activity surge, which literally shows which accounts have had a rapid increase in their activity score in recent time. That's always a great way to look at your accounts: first filter for high score, then look at those who have surged. Those accounts have gone high, and they've done it right now, so there's a very rapid, recent increase in interest from those accounts.

What we have released now is the custom activity score. This allows you to set the weightage for the individual sources within each of these six activity categories, which tells the AI what to consider more important and what to consider less important. If you set a score to zero, it tells the AI to disregard that source altogether, and if you set it to 10, that source is a relative maximum across all the sources it's looking at. The way to use this is basically twofold: give higher weightage to activities that indicate buying intent rather than just interest, and filter out activities that may generate a lot of signals but don't necessarily lead to buying intent.
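To make the weighting behaviour concrete, here is a hypothetical sketch of how 0-10 source weights could feed a final score, with a zero weight dropping a source entirely, as described above. The source names and signal values are made up for illustration:

```python
# Hypothetical illustration of 0-10 custom weights; a weight of 0 drops a
# source from scoring entirely. Names and values are invented for the demo.
def weighted_score(raw_signals, weights):
    """raw_signals: per-source engagement levels on a 0-1 scale."""
    active = {s: w for s, w in weights.items() if w > 0}  # weight 0 = ignored
    if not active:
        return 0
    total_weight = sum(active.values())
    weighted = sum(raw_signals.get(s, 0.0) * w for s, w in active.items())
    return round(100 * weighted / total_weight)

signals = {"github_own": 0.8, "community": 0.9, "docs": 0.5}
# Weight documentation up and mute community chatter:
print(weighted_score(signals, {"github_own": 8, "community": 0, "docs": 10}))  # 63
```

Note that because the result is normalised by the total active weight, muting a noisy source does more than remove its points: it redistributes influence toward the sources you kept.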

So for example, if reading documentation is a high-intent signal for you, definitely give that a high score. But if community activity does not necessarily mean that someone's interested in buying, you can give that a lower score. The AI will then give documentation more weightage and community less weightage when it builds out the final activity score. The goal here is to improve your pipeline. Say you want enterprise accounts, who will spend more time reading documentation, your pricing page, or your enterprise landing pages, and less time on community, where you'll get a lot of individual developers or SMB accounts. That's a classic situation where you weight it, saying let me give more weightage to enterprise signals and less weightage to individual and SMB signals.

The last point here is that any changes take about 24 hours to propagate across your account, because the score has to be recalculated for every single developer and then again for every single account as a roll-up. So it will take a little time, but once that happens, it'll be present across the entire account. Any changes you make apply at the developer activity score level, but naturally, as a roll-up, they also affect the account activity score.

Some real-world use cases: like I said, say you're tracking multiple repos on GitHub, a couple of your own repositories, a couple of competitor repositories, and a couple of big open-source repositories that are foundational to your category. You want those for intel, to help marketing and developer relations, but you don't want very high activity on complementary repos or major open-source repos to affect your activity score, because those repos will obviously generate a lot of activity. So you can score those down and keep the score for your own repos, because that's what shows real interest in your own product versus category or halo-effect interest. Another good case, especially for PLG products: if you have a direct self-serve motion, you'll get a lot of logins, but you may want to weight that down in favor of actual in-product engagement. So you give less weight to people who are just logging in but haven't really progressed with the product, in favor of people who are not just logging in but also using the product with depth.

And like the previous example I was giving, if you want to weight for enterprise signals versus community, or SMB versus enterprise, or any particular sub-segment that manifests different behavior in product interactions, that's another great way to use the weights in the custom activity score. I'll pause here before I get into the demo. Folks, any questions? Happy to take them.

Okay. I’ll get right back into it through the demo then.

So, if you can see my screen, I'm already logged into an account. This is a test account, so you're not going to see really perfect data; I'll put that as a caveat. But right off the bat, you can see that when you look at your accounts, you have the activity score here. An easy way to start is to just sort by activity score, so you see your highest first. And this is the surge I was talking about: here, the activity trend is the surge. You can also add a filter and filter for surge, which is accounts that have shown a rapid increase in recent time.

Now let's see how to actually change the activity scores. I'll go to Settings, then Configurations, then Activity Score, and let's configure the model. As I mentioned, there are six source categories: GitHub, documentation, product, website, forms and code interaction, and community. And each of these has multiple things that can be scored. Within GitHub, there are your own repositories, competitor repositories, and complementary repositories, so you can get into really deep detail on what you want to weight up or down. Going back to the previous example, if I want to track all of this but only score based on my own repository, because that's what indicates actual buying interest to me, I could just zero the others out. This does not affect any of the tracking; we're still going to track all the signals, and you'll still see everything in the activity timeline. It's just not going to affect the activity score itself.

One thing I missed: before you change any scores, I'd definitely recommend generating a preview. This takes some time, so I have one pre-generated from just before this call. You'll get a preview like this. Ideally, it should be a normal distribution, and if you make a lot of changes, this will obviously change dramatically. So it's always a good idea to take a snapshot of the preview before you start making your changes, and then see how much impact you made on the score distribution. Whatever changes you make, you should be aiming for a roughly normal distribution; you don't want your scores skewed dramatically high or dramatically low. That's one good check you can do. But getting back into it: going back to my previous example, if I want to track complementary, competitor, and owned repositories, but only score my own, I can just set the others to zero and we'd be good to go.
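The "check the preview distribution" advice can be approximated with a simple mean-versus-median skew check on a sample of scores. This is my own rough heuristic, not a Reo.Dev feature; the ten-point tolerance is an arbitrary assumption:

```python
import statistics

# Rough heuristic: if the mean drifts far from the median after a weight
# change, the score distribution is skewed rather than roughly symmetric.
def distribution_check(scores, tolerance=10):
    mean = statistics.mean(scores)
    median = statistics.median(scores)
    if mean - median > tolerance:
        return "skewed high: a few accounts are dragging scores up"
    if median - mean > tolerance:
        return "skewed low: the new weights may be suppressing most scores"
    return "roughly symmetric: score distribution looks healthy"
```

Comparing this check on a snapshot taken before the change and on the preview taken after is one lightweight way to quantify the "did I skew the distribution?" question raised above.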

I won't actually save it now because this is an account the entire team uses, but having done that, if you want to roll it back, you can just deploy the default model and it'll go back to the scores it had at the beginning. Another example: say you want to reduce the weightage for logins, because people log in quite frequently but a lot of those logins are new or free users who don't deeply engage with the product yet. You can give that a really low score so it'll still be counted and still contribute to the activity score, but not as much as a crucial screen within the product, where I'd say, okay, the screens that I'm tracking, I want to give these a really high weightage. Then developers who are logging in but doing nothing else will get a lower score than before this change, but developers who are logging in and reaching a particular milestone screen will get a higher score than before.

You can do this across all the different sources and then save and deploy it once and for all. And once you do that, it takes about 24 hours for the changes to take effect in the product.

Yeah, that’s basically the demo. Again, I’ll pause in case there are any questions, and if not, I’ll get into some highly strategic considerations to get the best out of using custom activity scores.

Just some things to keep in mind: when you're making changes, I would suggest always having a clear goal of what you want to change in your pipeline beforehand, so that you know what to look for and whether the change in activity score has created the desired result from a pipeline perspective. One way to think about this is to define a hypothesis. Say, for example, enterprise accounts are underrepresented in our pipeline; that's the problem you want to solve. And say the reason is that they generate less community traffic: community is biased towards individual developers, not enterprise accounts, who have other ways of evaluating and don't really go on forums. That's a way to boil down your pipeline issue, wanting more enterprise accounts, into a signal manifestation.

The implication of the enterprise under-representation is that they don't get on communities as much. So what we would do is reduce community scoring and increase scoring on key pages that enterprise accounts would look at. That's an example of a hypothesis you'd want to define before you change the scores, because then you can change the scores, see what happened to the scoring, see how that impacts your pipeline and the accounts your team can reach out to, how that's flowing across your deal pipeline, and whether it's actually useful.

A second consideration is to give it some time, because we all have a deal pipeline and no deals typically close in less than a month; if you're enterprise, it's going to be even longer. So have some leading indicators, like the volume or mix of MQLs and SQLs you're getting, but also track how this ultimately impacts your closed-won rates, your ACVs, your revenue. All of that is obviously going to take some time, so if you're making substantive changes, it makes sense to do it once, let the changes percolate right through the pipeline, and see if that's really helping from a revenue perspective before going back and making more substantive changes. Little tweaks here and there, of course, don't matter too much. But if you make sweeping changes across all the scoring, give it some time to really see the impact right through the pipeline before you get back to the drawing board.

Document your baseline pipeline metrics: what your MQL rate is, what your closed-won rate is, how much time deals are taking, how many deals you're able to assign to reps on a weekly or monthly basis, all of that, before making changes. Then, a month down the line, how much the change has helped becomes a fairly easy thing to measure.

And yes, try to test one hypothesis at a time. You may want to stop tracking competitor GitHub, increase the weightage for enterprise landing pages, and reduce the weightage for simple product logins that don't go deep into the product. But if you do all those changes at once, it's obviously hard to figure out what caused which change in your pipeline. So it's probably easier to do one thing, see what the changes are, see what the impact is across your entire GTM motion, then get into the next change, check its impact, and keep going from there. The one exception, I would say, is switching off an activity, that is, setting a score to zero. You can do that pretty much at any time, because you'll see the impact of that fairly quickly in the activity score.

That's pretty much all I had, right on 30 minutes. So a quick summary of what we talked about. We covered activity scores: an AI-powered roll-up score of all the activities a developer does, with the account score as a roll-up of that at the account level. It's on a 0 to 100 scale, and 60-plus is generally what we call high, the highest end of the spectrum and most likely an account ready for a sales conversation. Why change the scores or the weightage? To align the way the model scores and weights different signals with what indicates buying intent for your particular product. And how often to do it? Probably once a quarter, or at most once a month, so you can see the impact across your entire sales cycle.

And change one signal group, one hypothesis, at a time. Use the preview feature to see if the changes you made have skewed the distribution far to the left or far to the right. There's a degree of nuance here: for any signal with very high volume (say you're a large company that gets a lot of traffic, so website pages or documentation have very high volume), changes there can have a very big impact on the scoring, because the absolute volumes underlying them are high. Getting success is really a factor of having a clear hypothesis, making a conservative change, seeing what the impact is, and then rinsing and repeating.

That's it. So at this point, it's over to you. Go back, look at your pipeline, and see if there's something you want to change: what kind of accounts are flowing through, and what kind of accounts are being surfaced or highlighted. Develop a hypothesis, plan how you want to measure it, and especially, decide on your leading indicator. What's a good metric you can track, say, a week or two after the change that tells you that yes, this worked, this has changed things for the better, at least to start with, and fingers crossed, all the way down the pipeline? Then implement: you need admin access, you go to configure, preview all your changes, and then deploy. And just make sure you schedule a follow-up to see what the changes are.

Speaker Spotlight

Aditya Ramakrishnan
Product Marketing Lead at Reo.Dev

Aditya leads Product Marketing at reo.dev and works closely with our customers and power users to breathe life into Developer GTM as a practice. He's been a marketer for 12+ years, with multiple stints in developer GTM. Most recently he was VP Marketing at imagekit.io, a $5M ARR API devtool, and prior to that he led product marketing for Stripe in India, Australia, and New Zealand. He has deep experience launching APIs, building full-funnel motions, and driving demand for technical products. From Series A startups to global platforms, Aditya brings a rare blend of storytelling and systems thinking to every stage of the developer marketing journey.

Mariam Hazrat
Founding PMM at Reo.Dev

Mariam is part of Reo.Dev's founding team (Product Marketing), where she drives the GTM strategy for new feature launches and shapes in-product experiences to boost user engagement and adoption. With 4+ years of experience across both vertical and horizontal SaaS marketing, she brings hands-on experience across PMM and marketing functions. At Reo.Dev, she owns end-to-end GTM execution for all feature launches and builds in-product experiences like onboarding, feature tours, and more to drive user engagement.
