Tips to manage ad-hoc requests and keep you focused on the projects that matter
As data analysts, especially in organizations where the teams are not yet data-savvy, we often need to deal with numerous ad-hoc data requests from various stakeholders.
Taken one by one, these requests seem small, so you can quickly carve out time from your projects to work on them. However, when you add them up and calculate the total time spent, they may be consuming a large share of your working hours.
These requests, although they might not seem like much, can keep you away from the projects that matter: your long-term analytics projects that build technical depth or product features. The cost comes from the time and resources spent on this work (including lost deep-work time), not to mention the follow-up questions and confirmations from stakeholders after each request is delivered.
Here are several tips that I’ve learned over the years to handle these ad-hoc requests and, in some way, “work smarter.”
1.1 Understand the data flow and where your deliverables are going
Be curious and find out where the data comes from. You can start by understanding the overall product and business process: the objective of each step, and the input and output of each step. Following that, think about the data points associated with those steps, then follow up with the product manager or engineering team to check whether that data is already tracked and, if so, where it is stored.
Understanding this data flow will be useful for debugging when something is wrong with the metrics, or even with the product or systems. It will also help in identifying potentially important data points that are missing from tracking.
It is important to understand not only the data inflow but also the data outflow. Ask how the stakeholders are going to use the data (e.g. for which business decision, or as input to which system) so you can figure out the appropriate deliverable format for them, preventing them from coming back to you to fix the format!
1.2 Save main queries and keep them handy
Keep your key queries easily accessible. Key queries here are the ones you use frequently when fulfilling ad-hoc requests. There is no point in rewriting the same simple queries again and again when you can reuse them and shorten the processing time.
Also, for queries behind repeated requests, you can turn them into a query template with some defined variables (e.g. for the “where” clause). This saves time on your next query while preventing syntax errors from repeated retyping.
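As a minimal sketch of such a template, `string.Template` from the Python standard library can hold the query with named variables; the table, columns, and filter values below are hypothetical placeholders, not from the article:

```python
from string import Template

# A reusable query template; table and column names here are
# made-up examples, adjust them to your own schema.
DAILY_ORDERS_TEMPLATE = Template("""
SELECT order_date, COUNT(*) AS total_orders
FROM orders
WHERE order_date BETWEEN '$start_date' AND '$end_date'
  AND region = '$region'
GROUP BY order_date
ORDER BY order_date
""")

def build_daily_orders_query(start_date: str, end_date: str, region: str) -> str:
    """Fill in the template variables for a specific ad-hoc request."""
    return DAILY_ORDERS_TEMPLATE.substitute(
        start_date=start_date, end_date=end_date, region=region
    )

print(build_daily_orders_query("2023-01-01", "2023-01-31", "SG"))
```

For queries run programmatically against a database (rather than pasted into a SQL console), prefer your driver’s parameterized queries over string substitution to avoid injection and quoting issues.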
If you want to take it further, you can also create a simple dashboard with interactive parameters. You can then share it with stakeholders to explore on their own, depending on their data needs, preventing them from coming to you for common and simple requests.
1.3 Make your analysis reproducible
Oftentimes, especially at start-ups, we work with an agile methodology consisting of multiple iterations. Such iterations are usually required to evaluate dynamic markets and adjust to them. It’s not uncommon for an analysis to be rerun after some period of time, whether to check for updates or to serve other related stakeholders.
Continuing from the previous tip, on top of saving the main queries you can level up by making your analysis reproducible. This includes saving all the data (or a reference to the data source, if there are data retention limitations), the source code and scripts, and the tool documentation needed to reproduce the results.
I usually do my analysis in a Python notebook with sections, notes on the data source, and results. This way, if I need to rerun the analysis I can just update the variables (if any) and rerun the whole notebook, as opposed to rewriting all the scripts from scratch.
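A notebook laid out for reruns might look like the sketch below: parameters gathered at the top, and the loading and analysis steps wrapped in functions so only the variables need updating. The file name and column names are invented for illustration:

```python
import csv
from datetime import date

# --- Parameters (the only cell to edit on a rerun) ---
DATA_SOURCE = "signups_export.csv"   # reference to the extracted data
ANALYSIS_START = date(2023, 1, 1)
ANALYSIS_END = date(2023, 3, 31)

def load_rows(path):
    """Load the extracted data saved alongside the notebook."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def filter_period(rows, start, end):
    """Keep only rows whose signup_date falls in the analysis window."""
    return [
        r for r in rows
        if start <= date.fromisoformat(r["signup_date"]) <= end
    ]

def summarize(rows):
    """Produce the headline numbers for the write-up."""
    return {"n_signups": len(rows)}

# On a rerun: update the parameters above, then
# summarize(filter_period(load_rows(DATA_SOURCE), ANALYSIS_START, ANALYSIS_END))
```

Keeping the parameters in one cell, as sketched here, is what makes “update the variables and rerun the whole notebook” a one-step operation.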
2.1 Initiate ticketing system for data projects
Set up a ticketing system for ad-hoc requests and data project queues within your team. This will help in managing the workload and relieve the pressure of stakeholders sending direct emails or pinging team members with requests. A ticketing system also lets you collect more details on each request to consider for prioritization.
Some details that can be included in the ticketing form are:
- Requestor’s team
- Details of the request: what data is needed? In which format (e.g. spreadsheet, dashboard, analytics document, database table)? What will the data be used for, and how?
- Priority/time when the data is needed
This way, requests come in with more detail, reducing back-and-forth communication with the requestor, and the list of projects can be evaluated for priority and allocation of team bandwidth.
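Even before a full ticketing tool is in place, the fields above can be captured in a small structure and sorted for triage. This is only an illustrative sketch; the field names and priority scheme are assumptions, not a specific tool’s schema:

```python
from dataclasses import dataclass
from enum import Enum

class Priority(Enum):
    P0 = 0  # blocking, needed immediately
    P1 = 1  # important, has a deadline
    P2 = 2  # nice to have

@dataclass
class DataRequestTicket:
    requestor_team: str
    description: str     # what data is needed and how it will be used
    output_format: str   # e.g. "spreadsheet", "dashboard", "table"
    needed_by: str       # deadline as an ISO date string
    priority: Priority = Priority.P2

def triage(tickets):
    """Order the queue by priority first, then by deadline."""
    return sorted(tickets, key=lambda t: (t.priority.value, t.needed_by))
```

Collecting requests in a uniform shape like this is what makes the prioritization step mechanical instead of a negotiation over email.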
2.2 Batch-working and Time-boxing
I personally find that the main issue with ad-hoc requests is their unexpectedness. Sometimes it can be quiet, and sometimes it can be a flood. When it’s a flood, there is a tendency to get dragged into working on them mindlessly, just crossing off one ticket after another until they are all completed.
Batch-working is a strategy I’ve been using to tackle this. It means grouping similar tasks (in this case, ad-hoc requests) to be completed in one sitting, reducing the time and effort lost to context-switching (e.g. analyzing different tables) or moving between tools. This is useful for handling numerous simple tasks that can be combined.
To make sure I have time for important longer-term projects (and don’t just dwell on a pile of ad-hoc requests), I do time-boxing. This means pre-planning your schedule, allocating a fixed period for a planned task, and actually sticking to it. When it’s project time, I work on the project and defer any ad-hoc work unless it is a P0.
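As a minimal illustration of batch-working, queued requests can be grouped by the table (or tool) they touch, so each batch is worked in one sitting. The ticket IDs and table names below are made up:

```python
from collections import defaultdict

# Hypothetical queue: (ticket_id, main_table_it_touches)
tickets = [
    ("T-101", "orders"),
    ("T-102", "users"),
    ("T-103", "orders"),
    ("T-104", "users"),
]

def batch_by_table(tickets):
    """Group tickets touching the same table, so context is loaded once per batch."""
    batches = defaultdict(list)
    for ticket_id, table in tickets:
        batches[table].append(ticket_id)
    return dict(batches)
```

Here `batch_by_table(tickets)` would put T-101 and T-103 in one “orders” batch and T-102 and T-104 in one “users” batch; the same idea applies to grouping by output format or BI tool.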
3.1 Document the key metrics, dashboards, and analytics projects
Documentation is key. It serves multiple purposes, from aligning understanding (e.g. what the “active user” metric actually means), to quick access to metric values (via dashboards), to iterating on and improving analytics projects over time.
For metrics documentation, you will need the following:
- Metric name (e.g. active user)
- Metric description: what the metric is about (e.g. the number of users who logged in and had an active interaction, such as a click, add to cart, or transaction, on the website within the last 28 days)
- Metric calculation: the calculation formula, if any (e.g. for engagement rate, order success rate, etc.)
- Data sources/queries: the queries or tables used to extract the metric
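The fields above can live in a wiki, but even a small in-code registry works as a starting point. The metric entries below are illustrative examples, not real definitions, and the source queries are placeholders:

```python
# A minimal metrics registry mirroring the documentation fields:
# name, description, calculation (if derived), and source query.
METRICS = {
    "active_user": {
        "description": (
            "Users who logged in and had an active interaction "
            "(click / add to cart / transaction) in the last 28 days"
        ),
        "calculation": None,  # a direct count, no derived formula
        "source_query": "SELECT COUNT(DISTINCT user_id) FROM events",
    },
    "order_success_rate": {
        "description": "Share of orders that completed successfully",
        "calculation": "successful_orders / total_orders",
        "source_query": "SELECT status, COUNT(*) FROM orders GROUP BY status",
    },
}

def describe(metric_name: str) -> str:
    """One-line summary, handy for answering 'what does this metric mean?'"""
    return f"{metric_name}: {METRICS[metric_name]['description']}"
```

Keeping name, description, calculation, and source together in one record is what lets a documentation link answer an ad-hoc question without a rerun.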
For dashboards, you will need the dashboard name, a description, and the metrics listed. It is also great to have links to the metrics documentation.
For analytics projects, the documentation may vary with each analysis’s needs, but it usually includes background, objective, hypothesis, data exploration, and conclusion. Make sure to have an appendix section for the queries or scripts created.
With these documents in place, when the next ad-hoc data request comes in, you might be able to offer a “self-service” mechanism to your stakeholders by sharing these documentation links instead of rerunning the queries for them 🙂
3.2 Educate your stakeholders
I’ve worked with different kinds of stakeholders, from technical product managers to non-technical program managers and operations associates. Regardless of their technical skills, I’ve found that most of the time people are willing to learn. We can channel this eagerness toward using the organization’s data tools, establishing a “data self-service” culture that reduces simple ad-hoc requests.
Our main role is to help stakeholders get the data they need as quickly as possible. We can facilitate this by creating a repository of dashboards, query templates, and documentation, then publishing it for stakeholders to use.
Stakeholder education can start with a video on navigating the repository, then move on to simple use of the dashboards, applying filters and interactive parameters to get the data they need. More advanced education could be a workshop on data sources or a SQL/querying course, but this is not mandatory.
Having this set up helps both the data team and the stakeholders. As the data team, we see fewer simple ad-hoc requests and can focus on longer-term projects. The stakeholders learn new skills and can get the data they need faster by themselves, without relying on the data team and the ad-hoc request queue.