Last month I was in a 12-hour hackathon. This time I decided to bump it up to a full 24-hour hackathon. I walked into HackCU with almost no expectations, mostly because I was essentially a one-person team. So instead of trying to solve a complicated challenge prompt, I decided to build something I had always wanted to: a small system that predicts Formula 1 race outcomes for the 2026 season.
(I drew this somewhere around 2AM)
The idea behind the project, which I called Box-Box (if you watch F1, you know), was to build a model that could take qualifying results and other race-weekend signals and estimate each driver’s probability of winning or finishing on the podium. The system pulls race and qualifying data from the OpenF1 and Ergast APIs, generates features for each driver (qualifying position, practice pace, recent race form, circuit history, reliability metrics, and more), and feeds them into a small ensemble of gradient-boosting models. One model predicts the probability of winning, another predicts podium finishes, and a regression model predicts the final race order.
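The pipeline described above can be sketched roughly like this. Everything here is an illustrative stand-in rather than the actual Box-Box code: the feature names mirror the ones mentioned, the data is synthetic, and scikit-learn's gradient boosting stands in for whatever the real ensemble uses.

```python
# Rough sketch of the Box-Box pipeline (illustrative, not the real code):
# each row is one driver's entry in one race, with hand-picked features.
import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier, GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 400  # synthetic history: 20 races x 20 drivers

X = pd.DataFrame({
    "quali_pos": rng.integers(1, 21, n),      # qualifying position
    "practice_pace": rng.normal(0, 1, n),     # relative practice pace
    "recent_form": rng.normal(0, 1, n),       # rolling average of recent finishes
    "circuit_history": rng.normal(0, 1, n),   # past results at this circuit
    "reliability": rng.uniform(0.8, 1.0, n),  # DNF-free rate
})
y_win = (X["quali_pos"] == 1).astype(int)        # toy target: pole-sitter wins
y_podium = (X["quali_pos"] <= 3).astype(int)
y_finish = X["quali_pos"] + rng.normal(0, 2, n)  # noisy final position

# One model per outcome, as described: win, podium, and finishing order.
win_model = GradientBoostingClassifier().fit(X, y_win)
podium_model = GradientBoostingClassifier().fit(X, y_podium)
order_model = GradientBoostingRegressor().fit(X, y_finish)

# Per-driver win probabilities for one 20-car race weekend:
p_win = win_model.predict_proba(X.iloc[:20])[:, 1]
```

The nice property of this shape is that each model stays simple and interpretable, and the three outputs can be sanity-checked against each other after every race.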
Check out the website here!
This project was interesting because of the constraint: I had about six hours before the first race weekend predictions needed to run, which meant the focus was less on perfect modeling and more on building a pipeline that actually worked end-to-end. The model was trained using time-series cross-validation to avoid data leakage and achieved about 48% winner prediction accuracy in historical validation, which is roughly 10× better than a random guess on a 20-car grid.
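Time-series cross-validation of this kind can be done with scikit-learn's `TimeSeriesSplit`, which always trains on earlier rounds and validates on later ones, so the model never sees "future" races. A minimal sketch, with race rounds standing in for the real training rows:

```python
import numpy as np
from sklearn.model_selection import TimeSeriesSplit

# Rows ordered chronologically: one entry per historical race round.
rounds = np.arange(100)  # stand-in for 100 past races

tscv = TimeSeriesSplit(n_splits=5)
for train_idx, test_idx in tscv.split(rounds):
    # Every training index precedes every validation index,
    # which is what prevents data leakage across time.
    assert train_idx.max() < test_idx.min()
```

An ordinary shuffled k-fold split would leak future results into training and inflate the validation accuracy, which is exactly the failure mode this avoids.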
Even more fun was watching the system run live once the season started. After the first two races, the model happened to correctly predict both winners and four of the six podium slots, which was a pretty satisfying way to start the season. Of course, Formula 1 is chaotic. Crashes, mechanical failures, and race strategy will always break predictions, but that's part of the fun of building models around real-world systems.
INFO 6500
One talk that completely caught me off guard was by Heidi Biggs, who presented a project called Soft Sound Geographies. Their work combines GIS tools, GPS data from cycling journeys, and personal storytelling to create large quilted maps. Each quilt traces the path of a journey and is layered with projected video composed of field recordings, illustrations, animated clips, and other visual fragments connected to that experience.
What blew me away was that they treated data as something to experience. The GPS traces from rides become stitched paths in fabric. The landscapes and moments from those journeys are reflected in textures, colors, and moving images projected onto the quilt itself. The final piece becomes a layered narrative where craft, geography, and personal memory all interact.
Coming from a data science background, I’m used to thinking about spatial data in terms of maps, dashboards, or statistical models. Seeing the same underlying data translated into a tactile medium like quilting felt new. It made me realize how narrow our mental models of “data representation” can be. Data doesn’t always have to live in charts or tables. It can also exist in art, storytelling, and physical craft.
Another part of the talk that stayed with me was the idea of embodied data. The GPS traces represented physical movement through landscapes, relationships with places, and moments in time. The quilt becomes a record of lived experience. It was one of those talks that makes you step back and think about how many different ways there are to interpret and communicate information. Honestly, I walked out of that talk a little stunned. I had never thought about the intersection of data, art, and human experience in quite that way before.
Women in AI Colorado x SheBuilds International Women's Day @ Boulder
Around International Women's Day, I also joined a Women in AI Colorado × SheBuilds event in Boulder focused on building AI-powered applications using a tool called Lovable. The idea behind the session was simple: take an idea and try to turn it into a working prototype in a single afternoon.
Although getting there in person ended up being a bit difficult, I still spent some time experimenting with Lovable on my own. I could quickly move from concept to something tangible using modern AI-assisted tools. Instead of writing every line of code manually, the focus shifts toward defining the problem clearly and iterating quickly on ideas.
I started experimenting with a small idea for a climate-focused tool that estimates the carbon footprint of everyday activities, things like commuting, electricity usage, or food choices, and surfaces simple suggestions for reducing impact. The goal was not to build a full calculator but to explore how an AI interface could make sustainability data easier to understand and interact with.
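The core calculation behind a tool like this is simple: multiply each activity's quantity by an emission factor. A minimal sketch, where the factor values are illustrative placeholders rather than authoritative numbers (a real tool would pull them from a vetted dataset):

```python
# Illustrative emission factors in kg CO2e per unit. These are placeholder
# values for the sketch; a real tool should use a vetted dataset.
EMISSION_FACTORS = {
    "car_km": 0.2,           # per km driven
    "electricity_kwh": 0.4,  # per kWh consumed
    "beef_meal": 7.0,        # per meal
}

def estimate_footprint(activities: dict[str, float]) -> float:
    """Total kg CO2e for a dict of {activity: quantity}."""
    return sum(EMISSION_FACTORS[a] * q for a, q in activities.items())

weekly = estimate_footprint(
    {"car_km": 100, "electricity_kwh": 50, "beef_meal": 2}
)
```

The interesting part of the idea is less this arithmetic and more the interface layer on top of it: letting people describe their week in plain language and getting an estimate plus suggestions back.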
The event was part of a broader global initiative encouraging women and gender-diverse builders to experiment with technology and build things without worrying too much about technical barriers. I liked that spirit a lot. Sometimes the hardest part of building is simply starting, and environments like this make that first step much easier.
It was also interesting to see how much the developer experience is changing. The line between “coding” and “building” is getting blurrier every year. Tools like this make it possible to prototype ideas extremely quickly, which opens the door for more people to experiment with building products and tools.
Using AI Well and Building a Data Infrastructure to Support it (AI Initiative Conference)
I also attended the Using AI Well and Building a Data Infrastructure to Support It conference at the CU Boulder Law School. The event brought together researchers, policymakers, and industry practitioners to discuss how AI is evolving and what it means for society, governance, and infrastructure.
One of the opening discussions focused on taking stock of where AI stands today compared to just a few years ago. The pace of change between 2023 and 2026 has been remarkable. Models have moved beyond simple text generation toward more complex reasoning, coding, and tool-using systems. Benchmarks that once seemed difficult are quickly becoming saturated, and improvements on many knowledge tasks are happening steadily year after year.
Another interesting point was how the ecosystem around AI has expanded. Open-source models have progressed extremely quickly, often narrowing the gap with frontier models to just a few months. At the same time, the conversation around AI today includes both excitement and uncertainty. There are clear productivity gains and new capabilities, but also many open questions about long-term impacts on work and society.

A recurring theme throughout the talks was the importance of approaching technological predictions with humility. Historically, confident predictions about technology have often been wrong. For example, earlier forecasts suggested that ATMs would eliminate banks entirely, or that certain professions would disappear quickly with the introduction of new technologies. In practice, technology tends to reshape work rather than eliminate it entirely. As researchers and practitioners, the takeaway was to focus less on dramatic predictions and more on understanding real trends and evidence.
One panel that stood out discussed established uses of AI in trust and safety. AI systems are already used extensively to detect fraud, moderate harmful content, and identify child exploitation material online. Organizations such as the nonprofit Thorn were highlighted for their work using technology to help identify and protect victims of child sexual abuse material (CSAM). It was a reminder that some of the most impactful AI applications are often the ones working quietly in the background.
Another panel explored emerging best practices for using AI responsibly. Speakers discussed topics like governance, transparency, and the importance of designing systems that support both innovation and responsible use. One idea that stuck with me was the concept of using AI well. Simply adopting AI tools is not the same as using them thoughtfully. Organizations increasingly need to think about how these tools fit into real workflows, how people are trained to use them effectively, and how decisions around infrastructure, data, and energy use are made.
The afternoon sessions shifted toward the data infrastructure required to support AI systems, especially as demand for compute and data centers continues to grow. Discussions covered how infrastructure planning intersects with energy systems, environmental considerations, and regulatory policy. It was interesting to see how AI conversations increasingly extend beyond models themselves to include the broader systems that support them.
The conference was a good reminder that AI is not just a technical field. It sits at the intersection of technology, policy, infrastructure, and society. Understanding that broader context is becoming just as important as understanding the models themselves.
Break into Tech: LinkedIn Panel
I attended a LinkedIn careers panel hosted by the American Association of Engineers of Indian Origin at the University of Colorado Boulder, and it turned out to be one of those sessions that gives a much clearer picture of what working in tech actually looks like beyond job titles. The panel featured professionals from LinkedIn who shared their career journeys, the different paths they took into tech, and what their day-to-day work involves. Hearing from people in roles across product, engineering, and content made it clear that there isn’t a single “right” way into the industry; most paths are pretty non-linear. One thing that stood out was how much they emphasized learning on the job and being open to evolving roles rather than trying to plan everything perfectly from the start.

I also got to hear from Katie Johnson and Heather Monson, who shared practical insights on early career growth, internships, and navigating the transition from school to industry. The Q&A was especially useful; a lot of questions were around breaking into tech and standing out, and the answers were refreshingly honest. And as a nice bonus, I ended up winning one of the raffle prizes, so I now have six months of LinkedIn Premium, which honestly feels like perfect timing given everything I'm working toward right now.
Engineering Career Hub 101
I also joined a short session with Laura Garza from CU Boulder's Engineering Career Hub to learn more about the resources available to students navigating the job search. It was a quick overview of the different ways the Career Hub supports engineering students, from advising and resume reviews to employer connections and career exploration.
I'm starting to see how much of the job search is about learning how to navigate systems. There are many resources on campus designed to help students connect with opportunities, but they only work if you actually use them. Sessions like this are a good reminder that career development is not just about building technical skills, it's also about understanding how to position yourself, communicate your work, and take advantage of the support structures around you.
Building an AI Chatbot for your SaaS app in 1 Day
I attended an online session on building an AI chatbot for a SaaS product in just a day, and it was one of those talks that felt very practical and immediately applicable. The walkthrough focused on using the MotherDuck MCP server to connect an LLM to live user data in a secure, read-only way. It covered the full system: building a streaming chat backend, handling sequential tool calls, and even integrating custom visualizations directly inside the chat interface. It felt very aligned with how real production AI systems are actually being built right now.
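The sequential tool-call pattern from the talk looks roughly the same in any framework: the model either returns an answer or requests a tool, the backend executes the tool, appends the result, and loops. Everything below is a hypothetical stand-in (the `fake_model`, the tool names), not the MotherDuck or any real LLM API:

```python
# Generic sequential tool-call loop; fake_model is a hypothetical stand-in
# for an LLM that can either request a tool or produce a final answer.
def run_sql(query: str) -> str:
    # Placeholder for a read-only database call.
    return f"rows for: {query}"

TOOLS = {"run_sql": run_sql}

def fake_model(messages):
    # Pretend the model asks for exactly one tool call, then answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"tool": "run_sql",
                "args": {"query": "SELECT count(*) FROM users"}}
    return {"answer": "You have some users."}

def chat(user_msg: str) -> str:
    messages = [{"role": "user", "content": user_msg}]
    while True:
        reply = fake_model(messages)
        if "answer" in reply:
            return reply["answer"]
        # Sequential tool call: execute, feed the result back, loop again.
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
```

The loop is the whole trick: each tool result goes back into the conversation, so the model can chain several lookups before answering.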
There was also a discussion around multi-tenant systems, especially how to isolate customer data properly using hypertenancy. That’s something that doesn’t always get enough attention when people talk about AI apps. The session made it clear that building a chatbot is all about data access, system design, and making sure everything scales safely across users. I walked away with a reusable mental framework for how to design these kinds of systems end-to-end, which is exactly the kind of thinking I’ve been trying to build lately.
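Tenant isolation of the kind discussed can be illustrated with the standard-library `sqlite3` module: open the database read-only and scope every query by tenant ID. This is a generic sketch of the principle, not how MotherDuck's hypertenancy actually works:

```python
import sqlite3

# Build a toy multi-tenant database on disk.
path = "tenants.db"
con = sqlite3.connect(path)
con.execute("CREATE TABLE IF NOT EXISTS orders (tenant_id TEXT, amount REAL)")
con.execute("DELETE FROM orders")
con.executemany("INSERT INTO orders VALUES (?, ?)",
                [("acme", 10.0), ("acme", 5.0), ("globex", 99.0)])
con.commit()
con.close()

def tenant_orders(tenant_id: str) -> list[float]:
    # Read-only URI connection: the chatbot can query but never mutate,
    # and the WHERE clause scopes results to a single tenant.
    ro = sqlite3.connect(f"file:{path}?mode=ro", uri=True)
    try:
        rows = ro.execute(
            "SELECT amount FROM orders WHERE tenant_id = ? ORDER BY amount",
            (tenant_id,),
        ).fetchall()
        return [r[0] for r in rows]
    finally:
        ro.close()
```

The two guarantees stacked here, read-only access and per-tenant scoping, are exactly the ones that keep an LLM with database access from leaking or corrupting customer data.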
Research and Innovation Office (RIO)
As an Impact Intern at the Research and Innovation Office (RIO) at the University of Colorado Boulder, I've been working on something that goes beyond just building a project: making sure it actually lasts.
One of the main things I've been building is a website that surfaces funding opportunities for faculty and staff. But pretty quickly, the question became less about building and more about sustainability: how do we make sure this doesn't disappear after I graduate?
To solve that, I started working with the Center for Research Data and Digital Scholarship and Office of Information Technology to transition the project into a more permanent setup. We’re setting up a GitHub Enterprise environment for RIO and moving the website there, so it can be maintained and extended by the team long-term. This means the project shifts from being a student-built tool to something that actually lives within RIO’s infrastructure. And honestly, that’s the part I care about the most.
I’ve always wanted to build things that continue to exist and create value even after I’ve moved on, something that helps the intended users long after I’m gone. Being able to do that here at RIO feels really meaningful.
Alongside this, I also worked with Dr. Tanya Ennis to organize a workshop on Broader Impacts Budgeting and Financial Planning.
The goal of the session was to make proposal budgeting, especially for broader impacts, more accessible and less intimidating. We focused on helping participants understand how to plan early, what goes into a strong budget, and how financial planning connects directly to the success of broader impacts projects.
The workshop also featured experts from the Office of Contracts and Grants, and included campus resources that participants can use when developing proposals. The bigger idea behind both efforts, the website and the workshop, is the same: making it easier for researchers to access opportunities, plan effectively, and actually execute impactful work.
Wow, time really flew; it's already been three months as a Data Science Intern at Parsyl. More than anything, this phase has been about growth, both technically and in how I work.
I've gotten much faster at debugging things like Airflow alerts, and I'm now working across two teams, which has pushed me to manage priorities better and communicate more clearly. That was something I didn't fully appreciate before: how much of real work is just coordination, context, and clarity.
One thing that's helped me a lot is documentation. I started writing everything down: processes, debugging steps, small learnings. It's made a huge difference. It helps me quickly get context before meetings and makes switching between teams much easier. I didn't expect it to be this useful, but now I can't imagine working without it.
What I’ve also really appreciated is the team itself. There’s a lot of collaboration, and people are genuinely kind, patient, and open to questions. Being in an environment like that makes it much easier to learn. One moment that stood out to me was being recognized for ownership in an all-company meeting. I still remember my first all-hands, just sitting there and thinking about how much I needed to learn before I could contribute meaningfully. So hearing that later on, not because I was aiming for recognition, but because the work was actually helping, felt really meaningful. It also pushed me to keep showing up the same way.
I’ve also had the chance to walk senior team members through my debugging approach for Airflow alerts and Airbyte issues, which has been a great way to start building confidence in communicating technical work. On the technical side, I’ve been upskilling on tools like OpenTofu/Terraform, and honestly, learning from the people around me has been one of the best parts of this experience. I get to ask questions, understand how they think, and slowly build my own way of approaching problems.
One session from the Grad + Seminar Series that stood out to me was focused on understanding strengths, challenges, and liabilities, but in a way I hadn't really explored before.