Matthew Tullman - Senior Director of New Business Development at EyeSee

Behavior decoded: EyeSee welcomes Matthew Tullman

7 minute read
Blog

From the role of emotions in decision-making to the balance between qual and quant and the promise of AI, behavioral research is evolving fast. To explore these themes, we sat down with Matthew Tullman, who recently joined EyeSee as our new Senior Director of New Business Development.

With more than 25 years of experience at the intersection of behavioral science and commercial research, Matthew has built agencies, advised global brands, and turned countless first conversations into long-term partnerships. Born and raised in Manhattan and now based in Hanover, New Hampshire, he blends academic roots in neuroscience and psychology with a passion for connecting insights to impact. Whether it’s packaging, shopper journeys, or omnichannel strategy, he thrives on linking client challenges with smart methodologies that deliver lasting results.

Read on for Matthew’s perspective on some of the most pressing questions shaping the future of insights.

What are the essentials to remember when linking behavior and emotions?

When people ask me about linking behavior and emotions, I always advise them not to mistake the signal for the story. Put the same person in a crowded aisle with 30 seconds to make a choice versus a relaxed browse in an online shop, and you’ll see entirely different emotional profiles surface. If you’re not anchoring emotion to the environment and context, you’re misreading it from the start. The real meaning comes when you line up the emotional reactions with what else is happening from a behavioral and psychological perspective. What are the triggers for those reactions? Were they even looking at the brand? Were they about to make a choice? Without that context, you’re just watching a light show but missing the concert.

Diving a little deeper, we need to understand that timing is everything. Watch the sequence. Emotions aren’t static; they ebb and flow. Interest builds, confusion resolves, happiness triggers commitment. A frown that appears before the CTA is seen in an ad says something very different from a smile that lands after the logo is revealed. If you only measure a “snapshot” of how someone feels, you may catch the trailer but miss the movie. Once you anchor emotional peaks and valleys to the sequence of behavioral events, you suddenly see the full storyline instead of scattered scenes.
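To make the idea of anchoring emotion to the sequence concrete, here is a minimal, hypothetical sketch in Python: a per-second confusion trace from facial coding is lined up against logged behavioral events, and each emotional peak is reported relative to those events. The data, threshold, and event names are invented for illustration and do not describe EyeSee’s actual pipeline.

```python
# Hypothetical sketch: anchor an emotion trace to behavioral events.
# All values below are illustrative, not real study data.

# Per-second confusion scores from facial coding for one respondent (0..1)
confusion_trace = [0.1, 0.15, 0.6, 0.7, 0.2, 0.1, 0.05, 0.3, 0.1, 0.05]

# Timestamps (seconds) of behavioral events logged during the ad exposure
events = {"cta_shown": 5, "logo_revealed": 8}

PEAK_THRESHOLD = 0.5  # arbitrary cutoff for calling a moment a "peak"

# Find emotional peaks and report where they fall relative to each event
peaks = [t for t, score in enumerate(confusion_trace) if score >= PEAK_THRESHOLD]
for t in peaks:
    for name, event_time in events.items():
        position = "before" if t < event_time else "at or after"
        print(f"confusion peak at {t}s falls {position} '{name}' ({event_time}s)")
```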

Lastly, don’t hang your hat on one person’s twitch, no matter how strong that one reaction may be. Emotional profiles become powerful when you zoom out and look across people and across moments. If ten people in a row wrinkle their brow at the exact same moment in an ad, you’ve probably got confusion or disgust. If it’s just one or two people? They might just need new glasses. As my grandfather used to say: “You don’t predict the tide from one wave.”
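The same logic can be expressed as a toy aggregation across respondents: only flag a moment when a clear majority reacts at the same second. The traces and the consensus cutoff below are made up purely to illustrate the “zoom out” step.

```python
# Toy illustration of looking across people and across moments.
# 1 = brow furrow detected in that second, one list per respondent (invented data).
respondent_traces = [
    [0, 0, 1, 1, 0, 0],
    [0, 0, 1, 1, 0, 0],
    [0, 0, 0, 1, 0, 0],
    [0, 0, 1, 1, 0, 1],  # a stray reaction at 5s: likely noise on its own
]

CONSENSUS = 0.7  # share of respondents needed before we call it a pattern

n = len(respondent_traces)
for second in range(len(respondent_traces[0])):
    share = sum(trace[second] for trace in respondent_traces) / n
    if share >= CONSENSUS:
        print(f"{second}s: {share:.0%} of respondents react - probable confusion")
```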

At the end of the day, it’s definitely important to know what emotions are in the room, but without behavior to tell you where, when, and how often they actually matter, you may be navigating from a weather report instead of a map.

When does quant best upgrade qual (and vice versa)?

I like to think of qual and quant as dance partners. The trick is you must know who’s leading.

When quantitative steps in to augment qualitative, it’s usually because you’ve struck something golden in some deeper discussions with consumers and now you need to know if it’s a shiny nugget or an entire vein. In other words: how big is this thing, how often does it happen, and who really cares about it? That’s when sample size becomes your ally.

Quant also earns its keep when you need to stress-test trade-offs and priorities. People don’t make choices in isolation; they make them under constraint, with competing options and real financial pressures. That’s where tools like conjoint or MaxDiff come in: they let you model the squeeze of reality beyond the comfort of theory.
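For readers less familiar with MaxDiff, a deliberately simplified, count-based version of the idea is sketched below: each feature is scored by how often it is picked as “best” minus how often it is picked as “worst” across tasks. Real studies typically fit a proper choice model (for example, hierarchical Bayes), and the feature names here are invented, so treat this only as the intuition.

```python
# Simplified count-based MaxDiff scoring: best-pick count minus worst-pick count.
from collections import Counter

tasks = [
    # (best pick, worst pick) from each respondent task; feature names are invented
    ("price", "eco_packaging"),
    ("price", "brand"),
    ("resealable_lid", "eco_packaging"),
    ("price", "resealable_lid"),
]

best = Counter(b for b, _ in tasks)
worst = Counter(w for _, w in tasks)
features = set(best) | set(worst)

scores = {f: best[f] - worst[f] for f in features}
for feature, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{feature:15s} {score:+d}")
```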

Let’s flip it around. Qualitative comes in to rescue quant when the charts look great but the story is flat. If the numbers spiked or sank, fine. But why? That’s where a well-timed open-ended survey question or a post-hoc IDI (in-depth interview) can tell you whether that spike is curiosity, confusion, or just a glitch. Qual can also keep your quant tools honest. If a KPI isn’t moving, sometimes it’s not the consumer. It might be the way we framed the task or question. And those “weird” outliers? They’re not always noise. Sometimes they’re the early warning system for your next iteration.

So, when you boil it down, I recommend using qual to define the game, quant to keep score, and confidently switching between dance partners without losing the rhythm of the band.

What are the key Dos and Don’ts of applying AI in behavioral research?

When it comes to AI in behavioral research, three clear Dos and three Don’ts immediately come to mind.

First, do train AI on real behavior. That’s what makes the difference between something you can trust and something that’s just a guess. EyeSee’s Predictive Eye Tracking is a fantastic example, as it’s built on millions of actual gaze points, not just a saliency model that assumes where people might look. That grounding in reality gives our clients confidence that the predictions hold water.
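As a rough illustration of what “grounded in real behavior” can mean in practice (and not a description of EyeSee’s actual method), one simple sanity check is to ask how many real fixations land inside the regions a model predicts as high-attention. The zones and gaze points below are invented.

```python
# Toy check: share of actual fixations falling inside predicted "hot zones".
# Coordinates are invented; this is not EyeSee's method, only an illustration.

predicted_hot_zones = [
    # (x_min, y_min, x_max, y_max) in pixels on the tested creative
    (100, 50, 300, 200),
    (350, 300, 500, 420),
]

actual_fixations = [(150, 120), (280, 180), (400, 350), (600, 80), (460, 400)]

def in_zone(point, zone):
    x, y = point
    x0, y0, x1, y1 = zone
    return x0 <= x <= x1 and y0 <= y <= y1

hits = sum(any(in_zone(f, z) for z in predicted_hot_zones) for f in actual_fixations)
print(f"hit rate: {hits}/{len(actual_fixations)} = {hits / len(actual_fixations):.0%}")
```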

Second, do use AI to accelerate what matters most. It should help us get sharper answers faster. Whether that’s predicting shelf visibility, stress-testing packaging, or simulating online shopping paths, the goal is speed without losing the quality and richness of the insights.

And third, do combine AI with context. AI on its own is powerful, but when you embed it in realistic environments such as virtual stores, digital feeds, and e-commerce sites, it reflects how people will behave under real-world conditions.

Now for the Don’ts. First, don’t confuse shortcuts with science. Some platforms are built on models that only mimic attention rather than measure it. The results look slick but miss the behavioral truth. Second, don’t assume AI makes research effortless. Faster doesn’t mean hands-off. You still need smart study design and interpretation to get meaningful, actionable outcomes. And finally, don’t forget the human touch. AI is an augmentation, not a replacement. The real value comes when experienced researchers design the studies, then take those outputs, interpret them, and turn them into actionable recommendations.

In essence, AI should make behavioral research faster and smarter, but it only works if it’s grounded in real behavior, applied in real contexts, and guided by real human expertise. At EyeSee, AI isn’t a gimmick; it’s a force multiplier delivering speed and scale without losing the human truth at the heart of every decision.

Matthew, thank you for your time and your valuable insights; it’s been a pleasure hearing your thoughts!

If you would like to know more about marrying technology with expertise, read Expert tips to unlock behavioral AI KPIs in pack testing (7 min read) — a practical exploration of how AI metrics and human insight combine to turn behavior into actionable decisions.

Tags
Behavioral insight