Archive | WAW Recaps

September 2019 Recap – Cluster Analysis with Dr. Michael Levin

Our September meetup featured a strong turnout for the always-popular Dr. Michael Levin from Otterbein University, speaking about cluster analysis. We checked the records, and this was Dr. Levin’s 4th time presenting! An impressive feat that puts him close to the free tote bag for members of the five-timers club. Considering the quality of the content and the great questions it engendered, we’d better start designing that tote bag!

So what exactly is cluster analysis? “K-means cluster analysis” sounds kind of esoteric and difficult, but Dr. Levin showed both how crucial this kind of analysis is and the ease with which it can be implemented. We might have 10,000 individual customers, but if we want to actually analyze and then take action upon those customers, we really need to split them up into a manageable number of groups.

Don’t forget that “everyone in one group” and “everyone in their own group” are still groupings, just not very useful ones! A useful segmentation uses the smallest number of groups that still splits up our set along the dimensions we are interested in.

Dr. Levin walked us through an example of this kind of grouping with real world data and was brave enough to actually bring up Excel to do a live coding example. Typically that’s a good way to make sure everything explodes, but the only breakage was a few brief projector outages.

He was also kind enough to share both his slides and his Excel templates! The four cluster approach comes from Wayne Winston’s book “Marketing Analytics: Data-Driven Techniques with Microsoft Excel”.

Excel Templates:

Three Cluster Solution Template
Four Cluster Solution Template
Five Cluster Solution Template


This kind of analysis can, of course, also be done in your statistics package or programming language of choice. We will now provide a couple of links on how it can be done in R or Python, to satisfy our toolset “fairness doctrine” requirements as mandated by the cbuswaw bylaws. As a bonus, these also show just how simple Excel can make it!
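In that spirit, here is a minimal k-means sketch in Python using scikit-learn. The two customer features (annual spend and visits) are invented purely for illustration; they are not from Dr. Levin’s templates.

```python
# A minimal k-means sketch, analogous to the "four cluster solution"
# Excel template. The customer data below is randomly generated for
# illustration -- swap in your own export.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(42)
# Hypothetical customers: columns = annual spend ($) and visits per year
customers = np.column_stack([
    rng.normal(500, 150, 300),   # spend
    rng.normal(12, 4, 300),      # visits
])

# Scale the features so dollar amounts don't dominate visit counts
scaled = StandardScaler().fit_transform(customers)

# The "four cluster solution": ask k-means for k=4 groups
kmeans = KMeans(n_clusters=4, n_init=10, random_state=42).fit(scaled)

# Every customer now carries a cluster label 0-3 that we can act on
print(np.bincount(kmeans.labels_))  # size of each group
```

Choosing k is the judgment call the three/four/five-cluster templates above represent: run a few values of k and pick the smallest number of groups that still separates the dimensions you care about.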



Please join us next month when Martijn Scheijbeler from RV Share will discuss product analytics!

August 2019 Recap – Testing Strategy with Melanie Bowles

Our August meetup was an excellent session on A/B testing strategy with Melanie Bowles from InfoTrust. To go along with the theme of A/B testing, we also stepped up our door prizes and offered the crowd multiple variants, including AirPods and wine from campaign tracking service Claravine.

A vitally important but frequently overlooked part of doing A/B testing is the structure behind the testing. You might ask, “how could we possibly need a team of people, launch checklists, test priority queues, and all this other stuff if testing is as simple as ‘just one line of JavaScript on your site'”? Well, it turns out marketing is not always 100% true (shocking news!!) — and while implementing the testing tracking snippet itself might be pretty easy, there are many other steps in the process that aren’t so trivial.

Melanie did talk a bit about how one might evaluate different testing tools, but she was smartly tool-agnostic in her presentation. While there are many great discussions to be had about the different tools and the math behind them, without a good strategy on items like how to generate testing ideas and prioritize running those tests you could have the best tool in the world and your testing practice could stall out and go nowhere.

A key part of the testing process is consistency and replicability, which requires a good strategy thought out ahead of time! Anyone who has run multiple A/B tests knows that actually making decisions from your test outcomes can be hard. Simply running a test without deciding beforehand what your success conditions will be is very tempting, but it’s rarely the case (especially with a mature product) that the results speak for themselves in a vacuum. And then what do you do?
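One concrete way to pin down a success condition before launch (our illustration, not from Melanie’s talk) is to write down the exact statistical check you will run. A sketch of a two-proportion z-test on hypothetical conversion counts, using only the standard library:

```python
# Two-proportion z-test: is variant B's conversion rate different from
# A's? The counts below are made up; decide the p-value threshold
# BEFORE the test runs, then hold yourself to it.
from math import sqrt, erfc

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Return the z statistic and two-sided p-value for conv/n rates."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    p_value = erfc(abs(z) / sqrt(2))  # two-sided normal tail probability
    return z, p_value

# Decided before launch: ship variant B only if p < 0.05
z, p = two_proportion_z(conv_a=200, n_a=5000, conv_b=260, n_b=5000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

The point isn’t the math so much as the discipline: the threshold and the test are part of the template, written before anyone sees the results.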

Melanie recommended using templates to make sure your process is consistent and sustainable, and was kind enough to provide her slides including some example templates!

Please join us again next month back at Rev1!

July 2019 Recap – Presenting Results to Inspire Action

For our July event we had a great turnout to see Valerie Kroll from Search Discovery teach us about effective presentations. Part of a Search Discovery caravan down from Cleveland for the evening, Valerie presented a consistent strategy for getting the attention of stakeholders and driving action from test results.

The context of this strategy was A/B testing, but the larger points on presentation were totally relevant for any kind of presentation. No matter what kind of results you’re showing we were reminded:

  • To focus on the key results from the perspective of your audience.
  • Even a “failed” project can be an opportunity to learn important things.
  • It’s possible to boil down the results even more than you might think! A two slide presentation can be enough, and after all you are the real conveyor of content, not the slides.

Maybe it’s new to you, or maybe you think you’ve heard this stuff before — but focusing your presentation down to the simplest and clearest version of the results is consistently one of the most difficult (and important) parts of our jobs.

A consistent methodology for both the creation of a testing hypothesis and the presentation of results before you start actually running anything can be crucial, but is also a lot of work. Valerie showed us a very useful template for presenting results, and has been kind enough to share her hard work by making the PowerPoint templates available at the Search Discovery site here.

Valerie has also made her slides available!

Please join us next month at Rev1 again for more on testing when Melanie Bowles from InfoTrust will present on building an experimentation strategy.

June 2019 Recap – Mobile App Analytics

For our June event, Mai Alowaish from Blast Analytics and Marketing made the trip down I-71 from Cleveland/Akron to share an information-packed presentation on the many facets of mobile app analytics.

Her presentation covered:

  • An overview of what mobile app analytics is (and how it differs from mobile site analytics and hybrid app analytics)
  • The different underlying types of app analytics: marketing analytics (downloads, shares, deep linking performance, etc.), performance analytics / app health (crashes, errors, latency, etc.), and in-app analytics / product analytics (funnel behavior, personas and demographics, drop-off points, etc.)
  • The myriad different platforms for app analytics — which type(s) of app analytics they cover, as well as what their interfaces look like and enable
  • The different considerations when it comes to how to implement app analytics: to TMS or not to TMS? API hubs? CDPs?
  • How to actually go about planning what to track (see the speech bubble below for a key to that!)

The presentation is available for detailed perusal here.

We had a full house of engaged attendees!

And, as we’ve been doing all year, we had Columbus Web Analytics Wednesday T-shirts as a door prize drawing! One of the lucky winners was actually in town from the Bay Area, so we pretty much assume that “cbuswaw” will be mistaken for a hot new startup inside of a week, and we’ll be fending off venture capital funding offers:

If you’d like to experience the presentation almost as though you were there:

  1. Load up a plate with a few slices of pizza
  2. Get yourself a tasty beverage
  3. Watch the video below that Mai was kind enough to record with her slides and her voiceover!

We’ll be continuing our streak of fantastic content from out-of-town speakers next month when Valerie Kroll joins us to share her tips for presenting results that inspire action. We hope to see you there!

May 2019 Recap – The Path(s) from Data Analyst to Data Scientist

For our May event, we cast our speaker net out-of-state and convinced Jim Gianoglio from Bounteous to make the trip from Pittsburgh to share his experience and his thoughts on the myriad paths that exist for perambulation from “analyst” to “data scientist.” Or, as Jim subtitled his talk: “the transfiguration from reporting squirrel to unicorn:”

It was a packed house for the event, as Jim walked through his various explorations of options for advancing his analytics skills into the world of data science, which he boiled down to three options:

  1. Entering a formal degree program (online or offline)
  2. Relying on the various online courses and content that are available for free or a nominal fee
  3. Attending a bootcamp.

Jim initially dabbled in online courses, but, ultimately, went for a formal degree through Carnegie Mellon. The pros of that approach:

  1. The cost and face-to-face schedule meant that, even as the going got tough, bailing wasn’t really an option.
  2. The in-person interactions with professors and students made for productive collaboration and deeper learning (…including on the subject of — wait for it — deep learning, presumably </editorial license>).
  3. The networking benefits — in a traditional sense, this would mean that Jim was set up to hop to another role following the program, but, in this case, it meant that two of his fellow students got hired by Bounteous!
  4. The cachet of having a Master’s degree from a school like Carnegie Mellon — that’s good for the resume!

Of course, there were also downsides:

  1. It was an intensive and exhausting two years, as Jim continued to work full-time throughout the program, while also having a wife and three young children.
  2. It wasn’t cheap. Jim did the math as to how/when he would expect a return on his investment, and it made sense.
  3. There were still some “dud” professors, which can also happen in the online world, but, when you find yourself calculating a “cost per hour” during a lecture and getting a little steamed, that can be disheartening.

While Jim opted for the in-person, formal degree program, he also discussed, and provided a number of resources for, other options (some of which he availed himself of both before and after his formal coursework):

  • Online degree programs from accredited universities
  • Open courseware and content — use resources like the Open Source Data Science Masters to put together your own curriculum!
  • Bootcamps — although Jim warned that there is an explosion of these being offered, so the quality varies wildly, and bootcamps can make unrealistic claims (“Become a data scientist in just 14 weeks with our bootcamp!”)

Ultimately, there are an overwhelming number of options, which can be intimidating, but it also means that analysts can do some research and introspection and then figure out what is the best option for them!

And, with a little bit of statistics, some Python, and a little bit of R, you, too, can catch yourself speaking like a data scientist!

Jim shared his slides (with notes) here if you missed the event or attended and would like to reference the material. A smattering of resources he referenced and recommended are:

Join us in June for a discussion of mobile app analytics as Mai Alowaish from Blast Analytics & Marketing shares tips and best practices for mobile app analytics!

April 2019 Recap – The Future of Driving

For our April event we had husband and wife duo Kevin Boehm and Sharon Santino lay out the current state of autonomous driving as well as where we are headed down the road (ok, we promise no more car puns).

If you read some of the tech press or Elon Musk’s Twitter feed, you might think that we’re only months away from leaning back and letting our smart cars do all the work, but that’s not quite the case.

Sharon and Kevin brought our flying smart car dreams back down to earth a bit by explaining many of the challenges involved, but they also showed some of how revolutionary this technology will be when it does eventually fully arrive.

As usual, most of the engineering problems are related to people and their unpredictable behavior. While the cars may be getting smarter and smarter, people will remain people.

They also laid out how it’s not an all-or-nothing process, but much more of a continuum — and while fleets of cars at scale with no steering wheels at all may still be pretty far away, there’s also lots of this technology already out there.

Please join us next month when we’ll have Jim Gianoglio from Bounteous talk about the path from Data Analyst to Data Scientist!


As a bonus, check out the cool time-lapse that Sharon and Kevin made!


February 2019 Recap – What Lies Beneath Sentiment Analysis

Despite the crappy weather, many in the group recognized this event would have been a terrible talk to miss out on!*

If you didn’t manage to make it to our February event with Dr. Marie-Catherine de Marneffe from the Ohio State linguistics department, you might wonder why my writing is even (slightly) more convoluted than usual. Those who did attend will certainly recognize this as an example of a sentence that would be judged positive in sentiment by a human, but perhaps negative by a computer.

Dr. de Marneffe provided the group with fascinating insights about how sentiment analysis engines really work, and also when they might fall down on the job. These systems can be incredibly powerful and useful, but before relying on their output for real-world decisions we should really understand some basics, including:

  • What data the model was trained with. If the test data is similar to the training data (for example, determining the sentiment of a movie review with a system trained on movie reviews), then we might hope and expect some pretty accurate results! But take that same classifier, apply it to all the tweets you find about your product, and maybe not!
  • What kind of output does the model create? If you’re making real-world decisions based upon what the model tells you, maybe get some more details than happy face vs. sad face? It’s not an all-knowing magic black box and dangerous things can happen when we treat it as such.
  • What biases are inherent in the training data? The decisions made by the system reflect that data, warts and all! (Anyone remember the crazy Microsoft AI Twitter bot?)
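To make the training-data caveat concrete, here is a toy sketch (ours, not from the talk) of a tiny classifier trained on made-up movie-review phrases and then handed an out-of-domain tweet. All of the example text is invented.

```python
# A toy illustration of domain mismatch in sentiment analysis: train
# on movie-review-style text, then score an unrelated product tweet.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# "Training data": movie reviews with known sentiment labels
reviews = [
    "a masterful film with a gripping plot",
    "brilliant acting and a moving story",
    "a dull, predictable mess of a movie",
    "terrible pacing and wooden dialogue",
]
labels = ["pos", "pos", "neg", "neg"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(reviews, labels)

# In-domain text: its vocabulary overlaps the training data
print(model.predict(["a gripping and brilliant story"]))

# Out-of-domain tweet: essentially none of these words were seen in
# training, so the model has no real evidence and just falls back
# on its class priors
print(model.predict(["ugh my order shipped late again"]))
```

The second prediction comes out looking just as confident in the interface as the first, which is exactly why “happy face vs. sad face” output deserves a closer look.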

Real world examples of additional language data encoded in non-word form (yes, I mean their hands).

There will be no March meetup, but we encourage everyone to join us at the Women in Analytics conference at the convention center!

In April, please come back to Rev1 to hear Sharon Santino and Kevin Boehm talk about autonomous vehicles.

*So how did our first sentence do when run through the Stanford NLP lab sentiment analysis demo? Well, if you believe the computers, this must have been a pretty lackluster event… wrong again! Maybe that’s why they call it “machine learning”, not “machine knowing”?

January 2019 Recap – Dabbling in Data Science

For our first meetup of the new year we were in a new space (Hopewell) with a lot of new faces! Our speaker wasn’t new though, it was none other than cbuswaw co-founder and data science dabbleR Tim Wilson.

2019 is definitely the 11th year of Web Analytics Wednesdays (we counted), but what else is it? Is it the “Year of Mobile”? Or maybe the “Year of Linux on the Desktop”? Tim declared it to be the year of “Applied Data Science”! Sounds good to us. The mobile thing has already had a few years, and the Linux desktop thing doesn’t seem too likely… so let’s go with it!

But what does “applied data science” mean? Or “data science” for that matter? Rather than debating the definitions for the 100th time or drowning in Venn diagrams — Tim got to what it’s really all about, using data to answer questions.

Tim had somewhere between four and five examples of using R with Google Analytics to give new perspectives on the same old data that we all have and see. Which blog post was the most effective? What are the users on my site really interested in? When is my site most heavily used? Unlike the definition of “data science”, all of these questions are eminently answerable with the right approach. You can still use a Venn diagram though if you’d like.

Tim’s approach requires no fancy paid GA or BigQuery and no sites with millions of sessions, just a simple process:

  1. Have a question or idea about your data.
  2. Explore that data.
  3. Use R (and Shiny) to visualize and iterate on your ideas.
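Tim’s demos used R and Shiny; as an illustration of the same explore-and-visualize loop, here is a sketch in Python with pandas. The session timestamps are randomly generated stand-ins for a Google Analytics export.

```python
# Step 1 question: "When is my site most heavily used?"
# The data below is fabricated: one row per session with a timestamp.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
sessions = pd.DataFrame({
    "timestamp": pd.Timestamp("2019-01-01")
    + pd.to_timedelta(rng.integers(0, 31 * 24 * 60, size=5000), unit="m")
})

# Step 2 explore: bucket sessions by hour of day and day of week
sessions["hour"] = sessions["timestamp"].dt.hour
sessions["weekday"] = sessions["timestamp"].dt.day_name()
heatmap = sessions.pivot_table(
    index="weekday", columns="hour", values="timestamp", aggfunc="count"
)

# Step 3 visualize and iterate: print (or plot) the hour-by-day grid,
# spot a pattern, refine the question, repeat
print(heatmap.fillna(0).astype(int))
```

In a Shiny (or Dash) app, step 3 becomes interactive: sliders and dropdowns let you iterate on the question without rerunning the script.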

Tim also was brave enough to do a real-time demonstration of the R Shiny apps that he has created and made available. All of the code, slides, and links to the apps are available on github here:

You can also watch the presentation itself!

Please join us again next month at Rev1 when Professor Marie-Catherine de Marneffe from Ohio State will cover sentiment analysis, which should fit in nicely as an extension of some of what Tim talked about this month with text analysis!

November 2018 Recap – Data Visualization Tips and Tricks

At our November event, we brought back a past speaker — Tim Wilson from Search Discovery (and the Digital Analytics Power Hour podcast) — and a past topic: data visualization.

The one governing idea that Tim tried to convey (see the recap of Ruth Milligan’s presentation from our August event) was that effective data visualization is not about art or creativity nearly as much as it is about neuroscience. Simply reducing the cognitive load we’re placing on our audience is the best way to get them to focus on the data and message we’re trying to convey. Reducing the cognitive load means simplifying the visualization, and then simplifying it some more!

The slides from his session:

Or, if you think Tim’s dynamic delivery of the material would enhance your review, a video of said delivery (with the bonus of the Rev1 poltergeist glitching the slides regularly throughout the presentation):


The books Tim recommended for attendees to learn more were:

One of Tim’s tips was about building dashboards using Microsoft Excel (using very narrow columns to provide a somewhat flexible layout grid). He referenced that he had also presented in more detail on this topic, and, based on the overwhelming interest* in that material, we’re going ahead and including one of those presentations in YouTube form below:

* One person asked him about it after the presentation.

October 2018 Recap – Call Tracking and Analytics with Alain Stephan

At our October meet-up we learned a lot about an oft-neglected part of that tiny multi-function computational device we keep on us at all times: the phone. You know, that thing you use when you talk to people? Really, that does still happen… A lot.

Alain Stephan, SVP analytics services at call tracking and analytics company DialogTech, showed us how and why we might want to actually pay attention to what customers say when they call, no matter what size our business is.

162 billion calls driven by digital marketing!

Alain walked us through the basic mechanics of how this kind of system works: forwarding, recording, extracting the contents of a call, and then mining that content for points of interest in a customer journey.

We learned that the building blocks of this kind of system have come as far from the days of dial-up CompuServe (their contributions to text-to-speech technology notwithstanding) as an iPhone from an old rotary phone.

Maybe this doesn’t sound like “web” analytics, but those of us that are driving traffic and running marketing campaigns might want to think about what happens when the click we drove turns into an inbound customer call!

A digital analyst might think of the content of a phone call as an “offline” conversion, but if we can extract the contents and customer-funnel interactions in that call, that doesn’t sound fundamentally much different from a series of website interactions to me. And as we learned from Alain, one call can contain a series of different interactions, just like a site visit. If we’re listening to the interactions in a site visit, shouldn’t we be listening (this time, more literally) to the phone call interactions as well?