[Webcast Transcript] Detecting the Undetectable: Deepfakes Under the Digital Forensic Microscope

Editor’s Note: Deepfake technology forces us to confront an uncomfortable truth: our eyes and ears can no longer be trusted in the digital realm. HaystackID’s recent webcast explored how synthetic media has evolved from a technical curiosity into a weapon that threatens corporate security, personal safety, and the foundation of evidence-based decision-making. Digital forensics experts revealed the cat-and-mouse game between creators and detectors, where each advancement in generation tools demands new methods of identification and verification. The real-world cases discussed—from $25 million corporate losses to emotional harm—demonstrate that this isn’t an abstract future problem but a present danger requiring immediate action. Organizations must now build detection capabilities into their standard procedures, training teams to spot inconsistencies that software might miss while developing verification protocols for remote interactions. The webcast provided a blueprint for this defensive strategy, showing how technical analysis combined with investigative discipline can preserve the integrity of evidence and communication.

Expert Panelists

+ John Wilson, ACE, AME, CBE
Chief Information Security Officer and President of Forensics, HaystackID

+ Todd Tabor
Senior Vice President of Forensics, HaystackID

+ Rene Novoa, CCLO, CCPA, CJED
Vice President of Forensics, HaystackID


[Webcast Transcript] Detecting the Undetectable: Deepfakes Under the Digital Forensic Microscope

By HaystackID Staff

Deepfakes challenge one of humanity’s most fundamental assumptions: that we can trust our own senses to distinguish reality from fiction. This AI-powered synthetic media technology exploits our deeply ingrained reliance on visual and auditory evidence, creating convincingly false content that forces us to question everything we see and hear in the digital realm. What makes deepfakes particularly unsettling isn’t just their potential for misuse, but how they reveal the limitations of human perception: our brains, evolved to process authentic sensory input, struggle to detect sophisticated digital manipulations that can fool even trained observers. The democratization of this technology through accessible apps means that creating synthetic media no longer requires specialized skills. That shift has transformed deepfakes from a niche concern into a widespread phenomenon touching corporate boardrooms, social media platforms, and personal relationships alike, fundamentally altering how we must approach digital content in an era where seeing is no longer believing.

During the recent HaystackID® webcast, “Detecting the Undetectable: Deepfakes Under the Digital Forensic Microscope,” digital forensics experts examined the critical intersection of deepfake technology and digital forensics, and the sophisticated ecosystem behind deepfake creation, where multiple AI tools work in concert to generate increasingly convincing fake content. As panelist Todd Tabor explained, “It’s essentially an AI-generated reality or non-reality, similar to computer-generated images.” John Wilson emphasized the evolving complexity of the tools involved, noting, “The thing of interest here is that the tools do evolve… it’s not as simple as ‘Hey, I just go use this one tool and I do it,'” underscoring how modern synthetic media production requires orchestrating multiple sophisticated platforms.

The trio dove into the alarming real-world applications of deepfake technology, from corporate fraud resulting in $25 million losses during virtual board meetings to North Korean operatives using synthetic identities to infiltrate remote IT positions. Rene Novoa highlighted the psychological warfare aspect of these attacks, explaining, “You can’t trust what you see. What your eye sees and the mind believes,” while emphasizing how victims often pay extortionists not because the fake content is perfectly convincing, but to avoid potential reputational damage. The panelists detailed particularly concerning trends in romance scams targeting vulnerable populations, including teenagers and elderly individuals, where organized criminal teams create elaborate fake personas to manipulate victims, sometimes leading to severe psychological harm and even suicide.

The webcast included practical guidance for detection and prevention, emphasizing that combating deepfakes requires a multi-layered approach combining technical analysis with human expertise. Detection methods range from identifying fundamental inconsistencies in lighting and shadows to sophisticated analysis using specialized software that can detect frame-by-frame physiological changes invisible to the human eye. The experts stressed the importance of maintaining an “investigative mindset,” establishing proper chain of custody procedures, and implementing verification protocols such as verbal passwords for identity confirmation.

As Wilson stated, success requires educating entire teams: “You have to educate your legal teams, you have to educate your investigators, the people doing the work, the ones who are boots on the ground, so that they can look for those things, identify those things, and move that process forward for determining if the evidence is real or fake.” This comprehensive approach ensures that only authentic evidence informs critical decisions in our increasingly synthetic digital landscape.

Watch the recording or read the transcript below to get the full story.


Transcript

Moderator

Hello everyone, and welcome to today’s webinar. We have a great session lined up for you today. Before we get started, there are just a few general housekeeping points to cover. First and foremost, please use the online question tool to post any questions you have, and we will share them with our speakers. Second, if you experience any technical difficulties today, please use the same question tool, and a member of our admin team will be on hand to support you. And finally, just to note, this session is being recorded, and we’ll be sharing a copy of the recording with you via email in the coming days. So, without further ado, I’d like to hand it over to our speakers to get us started.

Todd Tabor

Hi everyone, and welcome to another HaystackID® webcast. I’m Todd Tabor, one of your expert presenters for today’s presentation and discussion, “Detecting the Undetectable: Deepfakes under the Digital Forensic Microscope.” This webcast is part of HaystackID’s ongoing educational series, designed to help you stay ahead of the curve in achieving your cybersecurity, information governance, and eDiscovery objectives. We’re recording today’s webcast for future on-demand viewing, and we’ll make the recording, along with a complete presentation transcript, available on HaystackID’s website at HaystackID.com. Today, we will explore the risk posed by deepfakes to truth, trust, and digital integrity, and how digital forensic experts can identify, analyze, and combat deepfakes in a legal and investigative context. Before we begin the agenda, we’ll conduct some speaker introductions. John?

John Wilson

Great. My name’s John Wilson. I’m the CISO and President of Forensics here at HaystackID, and I’m excited to be here and talk about this topic. It’s an exciting topic and it’s fun to talk about, not necessarily fun to deal with. So looking forward to it. Rene?

Rene Novoa

I think we’re rolling to Todd. But Todd, did you want to give a quick little intro?

Todd Tabor

Sure. I’m Todd Tabor. I’m the Senior VP of Forensic Operations here at HaystackID. I’m also excited to discuss the deepfakes related to this topic. Rene?

Rene Novoa

Yeah. Last but not least, I’m Rene Novoa, the VP of Forensics here at HaystackID. This is one of my passion projects: working in the lab in Chicago on emergent technology and looking at the current challenges and the future challenges that we’re going to have to deal with. So again, I have a lot of enthusiasm for this presentation.

John Wilson

Fantastic. Well, let’s jump right in. HaystackID is a data management company. That’s really what we do. We provide very specialized data services. We do data mining and all of the eDiscovery stuff across legal and compliance, and then a lot of expert engagement work. And this covers the spectrum of that. We are going to jump right into it. Why is it important? What do we mean when we say deepfake? So, Rene, what does deepfake mean to you, and why is it important for us to talk about today?

Rene Novoa

Yeah, John. I think we’ll cover the spectrum of what it’s probably meant for and some of the fun experiences you can have. But just as it says, a deepfake is a type of synthetic media created using AI, particularly deep learning techniques like GANs. We have the ChatGPTs and the DALL·Es for pictures. But it’s extremely important because it brings a lot of creativity, ease, and convenience, but there’s also a lot of responsibility. With this great privilege and great power comes great responsibility. I know I’m quoting Spider-Man there, but it is a great fundamental idea when we start talking about deepfakes, because it is very powerful, both in the things that you can do and in what you can train AI to do for you to make life easier. But in the wrong hands, it can be a different scenario altogether. That’s how I see it. I see a lot of good in AI and deepfakes, and I appreciate how they can be used for beneficial purposes, but they can also be very destructive.

John Wilson

Yeah. Well, and you said an interesting term there, which is “synthetic media.” Todd, what is synthetic media? What does that mean? Why is that a term?

Todd Tabor

It’s essentially an AI-generated reality or non-reality, similar to computer-generated images. It’s a video that’s purported to be real, but it’s completely computer-generated. One of the very first deepfake videos, for example, was a video of Barack Obama created by the comedian Jordan Peele. It sounded like Barack Obama but was clearly not Barack Obama. He put it out there, and it fooled a lot of people. It demonstrated the power of the tools currently available on the market.

Rene Novoa

And Todd, that’s the whole idea, is to make it look real. Right? And you have those adversarial networks that are competing: one that is generating a synthetic image, and then the other part, the discriminator, that’s trying to discredit it. It’s trying to find ways to fool itself. Does it pass the test? Can it create that realistic image or that realistic experience? So, the learning model cannot only create something but also has something to test against, because you’re testing it against the model. You’re testing against the input picture, so it can then go ahead and make more realistic imagery from that context.
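
Editor’s note: The competing networks Rene describes are the two halves of a generative adversarial network (GAN). The sketch below, which assumes Python with PyTorch installed, shows the basic training loop in miniature: a generator learns to produce fakes while a discriminator learns to reject them. The tiny networks, random stand-in data, and training settings are placeholders for illustration only, not tools discussed in the webcast.

    # Minimal GAN training loop: generator vs. discriminator (illustrative sizes only).
    import torch
    import torch.nn as nn

    latent_dim, data_dim = 16, 64  # placeholder dimensions

    generator = nn.Sequential(nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, data_dim))
    discriminator = nn.Sequential(nn.Linear(data_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    g_opt = torch.optim.Adam(generator.parameters(), lr=2e-4)
    d_opt = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
    loss_fn = nn.BCEWithLogitsLoss()

    for step in range(500):
        real = torch.randn(32, data_dim)       # stand-in for real media samples
        fake = generator(torch.randn(32, latent_dim))

        # Discriminator: score real samples as 1, generated fakes as 0.
        d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
                 loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
        d_opt.zero_grad(); d_loss.backward(); d_opt.step()

        # Generator: try to fool the discriminator into scoring fakes as real.
        g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
        g_opt.zero_grad(); g_loss.backward(); g_opt.step()

Each side improves against the other, which is why the output keeps getting more realistic over time.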

John Wilson

It’s also important to note that when you’re talking about synthetic media and deepfakes, they’re not always nefarious. I mean, synthetic media can be a product video or a product photo that was developed using these tools, and there are certainly plenty of legitimate reasons to use them. And so that deepfake name, it is what it is, but it carries a negative connotation. But there are certainly a lot of legitimate uses. There are also many instances where people are trying to deceive others and engage in inappropriate behavior. But synthetic media encompasses all of that. And synthetic media also doesn’t necessarily mean deepfake. Synthetic media can be a video that you took, but applied a ton of filters, cleaned up a lot of the background, and did a lot of editing to it. And that becomes synthetic media because of all of the alterations and activity. The changes that are made within a video, for instance.

Rene Novoa

Yeah, John. I think when you start using the word “deepfake,” just that term right there, you think of the dark web. And there’s all that negative connotation around it, but there are so many good uses for it. One of the issues, however, is that there are no clear regulations defining what constitutes a deepfake, or which tools are acceptable and which are not. But we have examples of how to create synthetic media using GANs, DALL·E, or ChatGPT, which can manipulate or generate original media. It is not just about fooling someone; it’s also about creating, gathering input, and producing original media and content, or something more conversational, the way we talk as humans, which is what ChatGPT produces. We are trying to get away from the connotation. But we also have to know what is out there, how it’s being created, and how it’s being used. And how, as experts, are we going to combat that and tell the difference between reality and something that’s synthetically made? “Synthetic media” is really the more proper term, too.

John Wilson

And so, that’s a great lead-in as we draw that line and define the terms “deepfake” and “synthetic media.” But how are they made, Rene? What tools are being used? What are the commonplace ones? There are a lot of tools out there. So let’s talk about some of the top 10 tools in the marketplace that people are using. The tool being used today is replaced by new advancements, and another tool will emerge tomorrow. There’s this constant battle of updates and new models coming out.

Rene Novoa

Well, yeah, the last word you said was very important. It’s “model,” right? How does it obtain its data, and how does it create new, original media? ChatGPT is designed to create conversational responses that mimic human speech. They’re trying to create original content that mimics human interactions. You also have models like DALL·E that work more on imagery. So they have different models, and they’re going to specialize, with a lot of great templates. I still believe there are templates involved. Some of them have more information offline, while others use the internet. You have DeepSeek AI, but upon closer inspection, you’ll find some edits and censorship. Depending on the topics you’re discussing, you’ll notice changes. It’ll come out with what it thinks should be right, but the model says that certain facts should not be posted, and it will go and edit its answers within a certain amount of time. It can be very useful for gaining a lot of valuable knowledge. But it can also be censored; there could be some censorship as it’s being trained. “Don’t talk about this, don’t talk about that.” So again, with all these different models and applications, where is it getting its truth?

John Wilson

The thing of interest here is that the tools do evolve. Like Veo 3, based on Gemini, is the tool of the day. It’s one of the more popular tools, but you also have to understand that people are using multiple tools when they create this. So, part of it involves making the video, creating the actual script, and creating the audio. And multiple, multiple tools are being used at the same time. So, that has impacts, and we’ll explore those later, but it’s not as simple as “Hey, I just go use this one tool and I do it.” Although it can be done, more sophisticated methods involve using multiple tools simultaneously.

Rene Novoa

You’re using certain tools to train other tools and models to obtain that content. And really, what are you giving it access to? With Microsoft Copilot, you are giving it access to a lot of your information and the content you’ve created, so it can learn where your data is, how to use it, and how it can assist you. You have some of the lower-end options, such as PixFun and Swap AI, which offer a lot of face-swapping and creative video-making tools. But there are a lot of templates in there, already created as the models, where you’re just doing cutouts of faces and scans. That makes it very easy with a lot less effort, but they’re a lot easier to detect because they are cheap and fairly easy to use. So, there’s a wide gamut: sophisticated tools that can be very hard to detect, especially when we start getting into audio, video, and sound, and then other ones that are meant for fun. And where we’re headed is more of the fun things moving along. We had a little fun with this by taking a picture from here and then throwing it into ChatGPT for the action figure. I was creating action figures and different outfits for the challenge: create an action figure with a briefcase and the word HaystackID. And I was able to design it by using my language, the way I talk, to create this imagery. It took a couple of tries as I talked through it and it learned how I spoke, putting the words together correctly to build out the imagery I was looking for, for the response that I was looking for, which was quite funny. And then I used a different app called PixAI, which could transform your appearance from 10 years old to 80 years old. As fun as that looks, I don’t know if it looks like me or not, but I thought it was funny, John. But that’s just having fun for the ’Gram as a joke. If you make yourself younger and post that on social media, in job interviews, or on dating apps, it gives a false sense of who that person is. It’s me, but it can be a different version. And it’s a matter of what reality I’m trying to showcase there. John, even though this is fun, it could definitely be used to set false expectations.

John Wilson

You said some interesting things there. You had to work with your words to ensure they were the right ones to achieve the desired output. And that’s generally prompt engineering. Prompt engineering is becoming quite sophisticated, and people are learning how to manipulate the results and generate content. We discussed using multiple engines, such as Microsoft Copilot, which is trained on all of our corporate documents. So, it understands our business, who we are, and our company vibe. And it’s able to take that and translate a lot of that into the output script that you get. Then maybe you can take that script and say, “Hey, here’s a video and this is what I want it to say.” You take the output script from Copilot and put it into Veo 3 to generate the video. And then you can take that same script and use ElevenLabs to generate the audio that matches it. There’s a lot of power there, and it does take a lot of understanding. But people are doing some exciting things. I mean, look at that instant bulk-up for Rene there.

Rene Novoa

Yes, and again, that’s fun. But as we start looking into the fun pictures, take a look at the ones to the right with me and Todd. We’re there at the British Consulate in San Francisco. I was able to remove the item and the crest on the wall very easily. Looking at it and the photo below, you can actually see it removed. You can tell something was there. Not a great job; I don’t think I have a lot of talent for what I did. But if you look a little closer, a little more subtly, at Mr. Tabor’s right hand in the top picture and then look to the bottom, there are some slight variations, some things that you can’t tell right away. You could or you could not. It took me a little while to realize that I had done too much editing. But as we go through and look deeper into the photos, we find signs that are very easy to spot and things that are hard to see. And that’s where it becomes a challenge. Some of it is so obvious, like the bulk-up, going from wearing pants to shorts, and that’s fun, and that’s much more easily generated by AI and some tools. But as we start getting into the photos and audio, and even some other photos or videos, it becomes harder to tell as we spend more time and make slight variations.

John Wilson

Looking at the photo of Todd and Rene, what happens if someone edits it so that instead of being at the British Consulate, you’re now at the Canadian Consulate? Changing the naming and changing the badge on the wall, or, as Rene did, erasing it. But Todd, can you talk about what some of the obvious tells are, or the things that we look at to figure out when a photo has been manipulated like that?

Rene Novoa

With the picture of Todd, he has the emblem picture, but below it, the item’s removed, yet the shadows remain. I did a terrible job cleaning up shadows, and we know that something is casting a shadow, but it’s not him, based on the light. Those are some very easy ones to tell, where manipulation wasn’t done all the way or maybe not all the way thought through.

John Wilson

Yeah. Sometimes you have to look at the lighting. Let’s go to the next slide. Suppose you have two different people who were brought into the same video, but they were taken from different photos or different videos. In that case, you might have lighting coming from 90 degrees on one and 120 degrees on the other.

Rene Novoa

And as we were doing this, this was all for fun. We’re showing some of the pictures and discussing the shadows. And we’re talking about the different use cases of changing faces, removing emblems, and being at a different consulate. But there’s a lot of abuse that can happen that is much more serious, as many people have heard of the sexually explicit deepfakes. We’ve seen some of those during Super Bowls with famous people, but it can really affect everyday individuals, especially when they may or may not be true. Being able to replace a male or female’s face on a body that’s just not theirs, in a very scathing picture that’s very offensive. And it’s very scary because it’s not a matter of whether it’s true. Do people believe it’s true? Do they believe that it’s you? Your reputation could be compromised. The thought of it possibly being true creates fear and a lot of psychological damage. And that goes along with impersonation and identity theft. In those pictures, I put myself on a body that was very flattering to myself, but I could have done the reverse and it would be very unflattering. And that could have been very harmful. Like, “Hey, this is what Rene looks like,” and it’s very unflattering. It could be very detrimental to my mental health and how people perceive me when it’s something that was done… They think it’s funny, but there’s a lot of harm in how you use this technology.

John Wilson

You have to talk about the cyberbullying, that sort of stuff. There’s extensive use in that realm, which can be very damaging and challenging for teenage kids. But you also have the financial extortions. People get into a position. They believe they’re interacting with a legitimate business entity, only to discover they’ve inadvertently provided funds to the wrong person due to the fake information and meetings. We’ll explore more case examples shortly, but it’s essential to recognize the great potential for good, as well as the potential for very nefarious activities.

Rene Novoa

Yeah, absolutely. These are some of the things that we’ve been talking about, John: the repercussions of that abuse and of making audio sound like other people, and the financial burden that comes from the extortion. We’re going to get into some examples of how that’s being used. But just the distress, the psychological distress… It’s not the technology. It’s how it’s being used, and being unable to trust. You can’t trust what you see. What your eye sees, the mind believes. And when you see that it’s you, or it’s about you or your loved one, it creates a lot of panic and distress and anxiety. Just the psychology of thinking that this photo, this voice, that somebody impersonated me by using my likeness, my imagery, can cause a lot of shame and humiliation. It can cause depression and a lot of other health problems. Not to mention the reputational damage, even if it wasn’t real and it turns out to be fake. People have already seen it, and they’ve already heard you supposedly do this. Whether it was a trick, and you trick somebody into doing something using somebody else’s voice, using a co-worker’s voice to play a joke. And I’m taking this really lightly, as we’re going to talk about some serious things in a few seconds. But just the whole trauma that these people go through in isolation. Because they’re scared. How do you prove it? Especially if you’re not in this industry or a professional, you won’t be able to point out those things. If you are, I don’t want to say a regular Joe, but someone who’s not into technology, who doesn’t know how to say, “This is a fake. This is not me.” And it’s just your word, and you have nothing to back you up.

John Wilson

Todd’s still having some technical challenges today, but we’re going to start getting into the meat of it. And I think “trust” is the keyword when you’re doing all of this stuff. Can you talk us through that, Rene?

Rene Novoa

We were discussing this before, particularly when someone creates something that’s been manipulated, such as synthetic media. How do you trust the media out there? How do you trust something that you can’t put your hands on and look at? We already have trust issues with logging on. We have 2FA, least privilege, and other security measures in place for access. Now we need to learn how to apply that same 2FA or secondary authentication elsewhere. We start looking at audio when we get phone calls, and they say they’re from your grandparents, your parents, or your children, and they’re in distress. How do you trust that voice saying they need money wired, to be sent over? Or that assistant, secretary, or finance or HR person who’s being instructed to send or wire large amounts of money? How do you trust that text or trust that voice? We will come up with some key takeaways on how to combat that. But trust is a big thing. Manipulation of trust is another form of this type of deepfake. It’s not just about photos, but also about our psyche and psychology, in how we perceive media and the sounds we hear. And just look at this number: I think we’ve discussed the cost being a $200 million-plus loss in the first quarter due to deepfake-enabled fraud, because people trust the wrong people and don’t trust the right ones. We were discussing grandparents, the audio, and sending money because they were instructed to do so. But it wasn’t the right individual or the proper request. This trust is a huge part of what we see and what we hear. And this is going to be the problem, or the challenge, that we’re going to have with synthetic media. Not just deepfakes, but synthetic media across the board is going to be that challenge.

John Wilson

Yeah, we talk about the $200 million-plus loss. And that number is actually quite a bit higher, but that’s what we know and can confirm. There was one case where a Chinese company had a US CFO who received an email: “Hey, we need to execute a wire transfer.” Very substantial, $25 million. And he says, “Well, this feels like a phish. It doesn’t seem legit.” So he reaches out to his IT and says, “I got this email. It doesn’t seem legit.” They start their process. In the meantime, someone sets up a board meeting to confirm the transfer request with the board members, conducts a virtual meeting via Teams or Zoom, and sets up a follow-up meeting. He gets on a call with five board members who all confirm that the transaction is legit, that it needs to happen. There’s an urgency to get the deal done. And he winds up transferring the $25 million after getting on this video board call. Come to find out that all of the board members were deepfakes. They were very well-trained. They talked in the right local dialects. The ones who should have had a southern twang spoke with one. The dialect was right; the vernacular and the words they were using were all right. The models were very well-trained and convinced him to proceed with the transfer, which cost the company $25 million. That’s some very real and significant impact.

Rene Novoa

You even trusted what you saw and what you heard, but it wasn’t real. Right? And those are the things that we’re seeing. We’re seeing, as you said, that it was very sophisticated and well-trained. We’re seeing a lot of nation-state threat actors that have put a lot of work in. Twenty-five million dollars is a great incentive to invest a couple of months in training and thoroughly setting up the scam at the most opportune time. That was just one example. John, thank you. We’ve seen voice cloning, such as with Marco Rubio and Susie Wiles, where someone impersonating Marco Rubio was calling other nations and leaving voicemails and text messages that were not his, but had the likeness of his voice. Luckily, nothing major came of using his voice. But, especially at this time, it can be very detrimental globally to the standing of the United States and other countries, and how things are perceived. These are very serious examples. It was fun creating fun videos and shirts, and then we started getting into some serious business, dealing with a $25 million loss. We have the voice cloning, and we have the Singapore blackmail of cabinet ministers. I don’t know if many people are aware of it, but various individuals were blackmailing cabinet ministers with synthetic photos of them in some very compromising positions. The ministers knew the photos were fake; they could almost prove that they were fake. But if they were leaked, the cultural shame and reputational damage would be significant, and some ended up paying money to prevent the photos from being leaked. At a certain point, they had to go ahead and do an investigation and stop the blackmailers. But initially, from what I read, that money was paid because some of the cabinet ministers did not want those photos out, even though they knew they were fake. They were being blackmailed. Just the whole psychology, the anxiety, the depression, the shame that could come with a photo being sent out to your family and friends, regardless of whether it’s true. Truth had nothing to do with it; it was more a matter of perception. John, go ahead and talk about North Korea, if you’d like. With North Korea, you and I have talked about this at length. According to the story, over 300 firms have hired synthetic individuals into their organizations as remote workers. All deepfake interviews where the video was turned on, including their resumes, their interactions, IT jobs, and getting behind the firewall. And these individuals end up being run by North Korean operatives. They had a different individual on camera and the voice of other people. They were getting paid a salary and were now behind the firewall. So when we start looking around, I’m here at Black Hat, and we’re talking about security. We have multiple DLPs and processes in place to stop intruders and slow down people trying to access your network. But when the hiring process simply allows somebody into the house to work from the inside against you, it becomes a lot harder to detect. And it’s simply getting through the interview process and being hired, all through using synthetic media, voice, pictures, and likeness, to do some horrible damage.

John Wilson

It is interesting because they’re getting hired into legitimate jobs. They’re doing legitimate jobs, but also gaining access to many things that they shouldn’t have access to, because they’re not actually who they say they are; the individual the company has hired isn’t the person they think they hired. There are multiple fronts to those attacks because, in many of those instances, there are various approaches. A woman from Texas had a server farm or laptop farm running in her house. She had dozens and dozens of laptops representing individual employees at various companies, where fake people had secured jobs under fabricated identities, with the pay routed to her in Texas. She was then receiving that pay and transferring it to these bad actors, who had essentially put her in a very bad position through similar fraud methods. And so these are very complicated attacks. They’re very sophisticated, and they were actually doing the jobs they were hired to do. So there was no tip-off there. The people were technically skilled at accomplishing the tasks, but you now have a bad threat actor who’s setting up the security posture at the company, and it’s not the person they claim to be. So now they’ve set up the security, but they’ve also left themselves access to various things they shouldn’t have had access to.

Rene Novoa

It all started with what looked like legitimate temp agencies that were created fraudulently. So, many of these companies had faith because they had skilled workers; they were able to produce people who could do the work. But then, when we’re getting into IT, you’re getting people who can provide elevated access, passwords, credentials, and support, while also learning about the organization. So it’s very, very sophisticated. And as you go down that rabbit hole, it becomes very, very scary. And to wrap up this discussion on these scams, here are some statistics we’ve compiled on romance scamming. We’re talking about a significant amount of money being taken by threat actors. With the romance scams, we started seeing some statistics. Teen boys, aged 13 to 19, and the elderly, aged 70-plus, were the highest and most significant demographic targets. One of the scary things I’ve learned about teen boys, and also girls, is that they’re not taking on these scams and threat actors one-on-one. From what we’re learning, these are teams of individuals focusing on one child or one individual. So, while one person is entering a romantic relationship with someone, others are learning about their parents, their workplaces, and their schools. They are gathering the email addresses of their teachers and their employers. By the time the romance scam has run its course, the threat actors have likely learned a great deal in the background and may have generated fake images or voices, whatever they’ve set up to make the victim believe that this is a real person. Maybe that individual sends something they shouldn’t have or shares imagery they shouldn’t have, and now the person on the other end is fake, and they have all this information. Now we’re talking about shame and embarrassment and depression. And there’s a high rate of suicide, especially with boys, because of that shame, because of what they’ve done and how they’ve been tricked. Anyone who has a teenage son or daughter between those ages knows how difficult it is, because they know everything. And I think that’s where we’re seeing a lot of damage with these. And it’s not fair when it’s five adults, five people who do this for a living, as a business, attacking the most vulnerable people in our community.

John Wilson

Yeah, absolutely. So let’s get into the meat of it. Let’s talk about how you start identifying them. What things do we do to identify what a deepfake or synthetic media may be? Can you walk us through some of that, Rene?

Rene Novoa

Yeah, John. This is something we’ve touched on: the lighting and shadows, as well as some of the background distortions I’ve shown in some of those pictures. As we dig deeper, we notice visual things that we can point out. We try to walk through them, including the lip-syncing, the shirt coming off, and the changes in the pants and body texture. Those are some of the obvious red flags for practical detection. But we have to dig deeper, definitely. As you discussed with the board members, the issues will extend beyond just lighting, shadows, and inconsistencies. It gets to something that Todd had mentioned: compression. We’re looking at the different layers of the coloring, patterns, and pixels, as well as how things are used and where the light is coming from. It takes more than just the naked eye and a review of a document or some media to truly bring out the details. Is this authentic, or has it been manipulated? And you don’t want to call something a deepfake that just removed a bird, or an edit to a wedding photo where you wanted to move your Uncle Al out of the background. I wouldn’t call that a deepfake or synthetic media, but those are the types of edits you have to understand. Has there been a cut? Has there been an add? And then trying to put it all together. What changes could have happened to this document or this media, this video, this photo?

John Wilson

These are the obvious things you can look at; some of it gets super sophisticated. Look frame by frame at the video to understand: is the heart rate of the individual consistent? There are minor skin tone changes that occur, which can be detected programmatically but are less noticeable to the human eye. Those sorts of things can help hone in on, “Hey, is this a legitimate piece of media or synthetic media? Is this a deepfake, or has it been altered for any reason?” Sometimes it’s not a nefarious reason; it’s just a product video, for instance, that gets altered in a way that’s incongruent with the company and causes business damage. So you do have to look: “Hey, are the shadows right?” And at all the objects in the video. And then also looking, as you talked about, at the compression and the heat map, frame by frame, to determine if there are artifacts in the pixels. Not necessarily visible, but within the pixels you can see shading changes, where there might be a section that has a sudden cut to two or three shades different. That’s not detectable by the naked eye, but it is certainly detectable through programmatic means or by using software and tools.
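
Editor’s note: The frame-by-frame checks John mentions, tiny color shifts tied to pulse and abrupt local shading changes, can be sketched in a few lines of Python. The example below is a simplified illustration rather than a production detector; it assumes OpenCV and NumPy are installed, and the file name and thresholds are placeholders.

    # Crude frame-by-frame screening: track subtle color shifts and abrupt shading jumps.
    import cv2
    import numpy as np

    cap = cv2.VideoCapture("interview_clip.mp4")  # hypothetical input file
    green_means, jumps = [], []
    prev_gray, frame_idx = None, 0

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Stand-in for a pulse-related signal: average green intensity per frame
        # (real tools would track a skin region of the face instead).
        green_means.append(frame[:, :, 1].mean())

        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY).astype(np.float32)
        if prev_gray is not None:
            # Large localized shading changes between adjacent frames can point
            # to spliced or regenerated regions.
            diff = np.abs(gray - prev_gray)
            if (diff > 40).mean() > 0.05:  # illustrative thresholds
                jumps.append(frame_idx)
        prev_gray = gray
        frame_idx += 1

    cap.release()
    print("frames with abrupt shading changes:", jumps)
    print("green-channel signal variance:", np.var(green_means))

Flagged frames would then go to an examiner for closer review, not be treated as proof of manipulation on their own.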

Rene Novoa

Yeah. Well, there was a keyword in all of that; I took a lot away from that, John, but I think it’s “investigation.” You have to look at all of this stuff rather than just one thing. You were pretty comprehensive about the heat maps, the noise, and the detection. And that’s the role that, as we move forward, you should have in your organization, or you should be partnered with people who are conducting investigations in this area. Because it’s more than just identifying that something’s been lost or changed, and we can’t do this by ourselves. We have some great tools, but we are also using AI to help us. We start looking at pixelation, shading, and other things. We talked about compression. Why is that important? When we see the original video with multiple compressions, that means things have changed. It’s been re-saved. Why was it re-saved? Why is the compression so important? Now that we know the video has been re-saved before, why? What changed? What occurred for them to re-save this imagery? So, it’s about knowing those little things and having an investigative mindset to understand: how many times has this been saved? What has been changed? Maybe it’s minor, maybe it’s major. We can train AI tools and create some of these tools for recognition as we build a database. We’re still very early on in this investigative process and our standards. And you’re going to start seeing a lot of templates to help identify these things. But even as the AI improves to try to circumvent us, those are going to be the challenges.
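
Editor’s note: The recompression point Rene raises is the idea behind error level analysis (ELA), a common first-pass forensic technique. The sketch below, which assumes Python with Pillow and uses a hypothetical file name, re-saves an image at a known JPEG quality and maps how differently each region compresses; it illustrates the concept rather than any specific HaystackID workflow.

    # Simple error-level analysis: regions edited after the last save tend to
    # recompress differently and stand out in the difference image.
    from PIL import Image, ImageChops
    import io

    original = Image.open("evidence_photo.jpg").convert("RGB")  # hypothetical file

    # Recompress at a fixed quality and reload.
    buf = io.BytesIO()
    original.save(buf, format="JPEG", quality=90)
    buf.seek(0)
    recompressed = Image.open(buf)

    # Pixel-wise difference; brighter areas compressed differently, which can
    # indicate localized editing or multiple save generations.
    ela = ImageChops.difference(original, recompressed)
    max_diff = max(channel_max for _, channel_max in ela.getextrema())
    print("maximum error level:", max_diff)

    # Scale the difference image so an examiner can review it visually.
    scale = 255.0 / max(max_diff, 1)
    ela.point(lambda px: min(255, int(px * scale))).save("ela_overlay.png")

As with any single indicator, an uneven error-level map is a reason to dig deeper, not a conclusion by itself.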

John Wilson

We received a question from the audience about what people do in this situation. The person goes on to say that their law firm only conducts in-person interviews. But some companies are fully remote and no longer have physical presences. So what do they do in those circumstances? How do you recommend moving forward with hiring, for instance?

Rene Novoa

There should be multiple interviews and getting eyes on the individual. Even though things are remote, we are doing as much verification as possible, because social media and LinkedIn profiles can be manipulated and generated. So, it’s essentially adding multiple layers of checkpoints, in my opinion, John. Even here at HaystackID, there are hiring methodologies: getting different individuals involved, trying to get a feel, trying to see how candidates speak, and asking not just tech questions but questions about what they do, about lifestyle. It’s very interesting to hear them talk in a professional way and see how they change. We started looking at AI models, and there are certain things they can’t do, can’t fake, especially in their responses.

John Wilson

Yeah, exactly. And it does come down to asking some technical questions, the commonplace things, as well as some logical reasoning-type questions and puzzles. There are several ways to do this. Tools are available to help validate that a video, such as a Teams video, is with someone who isn’t an avatar but an actual person.

Rene Novoa

Requiring no virtual backgrounds is good: no green screen, seeing the individual’s real background if possible, and other little details like that. It’s just adding more roadblocks for the individual. You can’t be entirely sure; you won’t catch everything. But we can narrow down our hiring to ensure we’re not bringing in IT professionals who could do harm. And that all comes back to this point: authenticity and admissibility. As we conduct these investigations, or are part of the hiring process, how do we authenticate that people are who they say they are? How do we authenticate that this video is real? Even when I submit real documentation, how do I convince the courts or the legal team that this is a real photo? That this is an authentic picture, taken from a camera or captured directly? I can also prove that, despite these fears, the imagery is authentic and not a deepfake, providing a real report showing that what I’m saying is true. Which is where we come in with blockchain and timestamping. We have all these things in place, but we need to continue developing more tools, or new tools, to address these challenges as they change.
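
Editor’s note: The blockchain and timestamping point comes down to recording a cryptographic hash of the evidence at collection time so that any later alteration becomes detectable. The Python sketch below shows only the hashing step; the file name is a placeholder, and no particular ledger, blockchain platform, or HaystackID process is implied.

    # Compute a SHA-256 fingerprint of an evidence file at collection time and
    # record it in a signed chain-of-custody log or immutable ledger.
    import hashlib

    def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    collected_hash = sha256_of_file("board_meeting_recording.mp4")  # hypothetical file
    print("record this value at collection:", collected_hash)

    # Later, re-hash the produced copy and compare; any mismatch means the bits changed.

The hash only proves the file has not changed since collection; establishing that the content was authentic in the first place still requires the investigative steps discussed above.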

John Wilson

Some in the software industry are moving towards a process that incorporates authenticity and validation efforts into the videos created on their platforms. So there’s some of that going on. But when you have to deal with, “Hey, is this video real or fake?” the goal is to capture the closest or best available source. Especially if you have two versions of a video, it’s best to capture both. Making sure that we’re getting the conversation with the individuals involved, establishing the chain of custody. We’re validating the tool and the videos themselves, and talking to the people involved to understand where they may have been altered or not altered. And using that information to be much smarter about, “How do we get deeper into it? How do we help validate that video?” And the interesting thing is, we talked about synthetic media. “Authenticity is no longer assumed” is a great statement, because when you take a photo with your phone and post it to Twitter or Facebook, the photo gets altered. Facebook removes some of the metadata to prevent your location information from being posted, or it alters a combination of different elements. There’s a lot of metadata in videos, including a lot of EXIF data, and many of the tools modify some of that. And a lot of the tools nowadays, when you take a picture and you want to post it, let you say, “Hey, yeah. Remove the tree in the background or the flower, or add a bird.” Very simple things. But that’s still synthetic media. It has been altered. It has been manipulated. And so a lot of the time, just taking a picture with your phone and posting it directly to a website can alter that data, and it does become synthetic media at that point. Understanding the impacts of that is important. And you have to educate your legal teams. You have to educate your investigators, the people doing the work, the ones who are boots on the ground, so that they can look for those things, identify those things, and move that process forward for determining, “Hey, do we have real evidence here? Do we have fake evidence?” And making sure that we’re not relying on fake evidence. Because again, it’s not just pictures of people and birds; it’s actual chats. It could be contracts. It could be all sorts of documents that are put into a video. It can be a video capture of a chat conversation or a video presentation like this, and somebody can go in and alter it to say, “Hey, John said that you don’t need to authenticate things,” or similar comments. You also have to talk about, “How do you protect your organization? How do you deal with it in litigation?” and that sort of aspect. But also, how do you protect your organization from deepfakes? Having an organizational deepfake password can be important. Have your family and friends create a deepfake password. That way, if someone asks for a key action or information, you can respond with, “Hey, what’s the deepfake password?” And they can tell you it’s “blueberry risotto,” because those aren’t two words that naturally come together. And so now you have that deepfake password. It’s not anything that you put in writing. It’s a verbally exchanged thing. And it allows you to verify that the person is who they claim to be, because they have knowledge that’s not documented or written anywhere, and therefore there’s no way for anybody else to have consumed that information.
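
Editor’s note: The metadata discussion above can be illustrated with a quick EXIF check. The Python sketch below, which assumes Pillow and uses a hypothetical file name, lists whatever EXIF fields survive in a photo; missing fields are consistent with the stripping and re-processing John describes, though their absence alone proves nothing about authenticity.

    # List surviving EXIF metadata from a submitted photo as a starting point
    # for provenance questions (device, date, editing software).
    from PIL import Image
    from PIL.ExifTags import TAGS

    img = Image.open("submitted_photo.jpg")  # hypothetical evidence item
    exif = img.getexif()

    if not exif:
        print("No EXIF metadata: consistent with stripping, re-saving, or generation.")
    else:
        for tag_id, value in exif.items():
            name = TAGS.get(tag_id, tag_id)
            print(f"{name}: {value}")
        # Fields like Make, Model, DateTime, and Software prompt the next questions:
        # does the claimed capture device and date match the story being told?
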

Rene Novoa

So you’re talking about a personal 2FA.

John Wilson

Yeah, absolutely.

Rene Novoa

I love it. So yeah, John, that was fantastic. There’s a lot to take away here. What we’re saying today, and some of the ways we’re identifying deepfake synthetic media, are going to be different tomorrow. The models are getting better, as the perpetrators and the bad actors are also training their new models against the ways we’re combating them and recognizing synthetic media and things that have been changed. They’re finding ways to train it to look more real by utilizing more of the GAN model, where two different networks compete with each other: one making the fake photo or video, and the other trying to catch it, so that the most realistic imagery or media gets presented to us. There are a ton of websites out there where, if you need models, you can say, “I need a Latino man or woman in this industry,” and it will generate a photo for you, for marketing, for great reasons, right? But these people are not real, and their imagery can be used in all sorts of different ways, like, “Hey, I can describe this person,” and telling the police or the FBI, “This is what this person looks like.” And all along, it’s a generated synthetic person that’s not real. You’re not going to track them down. Not everything your eyes see is something your mind should believe. You have to do 2FA. You have to dig deeper. As professionals, we need to do more than look at individuals or the media and run a quick scan. It’s going to take that layered approach: the noise, the compression, the metadata stripped by Facebook and Instagram, and the things that are just not so obvious. You’re going to have to dig deep. There are a lot of things to unpack moving forward.

John Wilson

Yeah, absolutely. We have some great takeaways on the screen. I will say that, especially in this early stage, people aren’t as familiar. There’s not as much knowledge. The tools aren’t yet providing certification, validation, or authenticity confirmation. If you have questions, please reach out to someone with expertise in this area. You can’t trust what you see. You can’t believe what you hear. You’ve got to… Our approach and our statement are always “trust but verify.” If it looks like a fish and smells like a fish, it’s probably a phish, but not always. So you have to dig deeper. You need to examine it closely, enlist someone with the necessary expertise, and acquire the tools that can help identify these issues. And that’s how you push forward.

Rene Novoa

We’re right at time here, John.

John Wilson

Yep. So we’ll open the floor for any additional questions if anyone has any. And if they don’t, then we really appreciate you being here today. We did get a question: “Do you foresee a time in the near future when deepfakes will no longer be identifiable?” I’ll tackle that one. Absolutely. There are a lot of them out there now that are very difficult to identify. What I will say is that generally, they can be identified through sophisticated deep metadata dives, looking at the ones and zeros and identifying the light patterns, as well as the alterations to the metadata, and so on. Will the tools…? I don’t know if the tools will take the time to eliminate some of the metadata alterations and similar issues, but we’re already close to there. Some of the deepfakes today are doing a great job at making sure that there’s continuity. There are no longer videos of people with six fingers and three eyeballs and things of that sort, all the little oddities that were coming up in deepfake videos early on. People are looking quite legitimate. They’re getting the right intonations, the right dialect, and the right vernacular. They’re getting much more difficult to identify. It is going to come down to tooling, looking at the ones and zeros and examining the actual bits and bytes.

Rene Novoa

One issue is that AI struggles with left-handed actions. It’s very random, but we are finding it very easily when they try to do things with the left hand. If you instruct a model to do things with the left hand, for whatever reason, a lot of the models struggle with trying to duplicate left-handed actions. It just doesn’t understand it. And that’s true today; we’ve seen many models asked to do something with the left hand, and it’ll still come from the right side, or it’ll come from the left side with the right hand. Those are some dead giveaways, and that’s more of a trivial thing. But we’re looking for these different patterns, where it doesn’t know what to do, because it is not able to replicate humans a hundred percent yet. So we’re getting close and trying to learn these little tricks and tips. Stay tuned for some future tips and tricks from HaystackID.

John Wilson

All right, we have one more question that I’m going to tackle, and then our time today has expired. The question is, “Do you think when uploading general videos, deepfakes or not, a blockchain verification might be added into our future for integrity?” Yeah, certainly. Blockchain is a strong consideration because it matches bits and bytes to an immutable ledger of information. Several software companies are exploring the creation of blockchain-based authenticity or verification to confirm, “Yes, our tool created this video.” What that’s going to look like and what actual adoption will be, I don’t know yet. In closing today, thank you for joining today’s webcast. We do truly value your time and appreciate your interest in our educational series. Don’t miss out on our upcoming August 20th workshop with the EDRM, “Building eDiscovery Expertise: Where Education Begins and Never Ends.” During the program, legal tech pros will share actionable strategies to support continuous learning and long-term growth for professionals at all stages of their eDiscovery journeys.

Check out our website, HaystackID.com, to learn more, register for this upcoming webcast, and explore our extensive library of on-demand webcasts. Once again, thank you for joining us today for the webcast, and we hope you’ve had a great day.

Todd Tabor

Thank you.

Announcer

Thank you. That wraps up our masterclass. Thank you all for joining us today. A special thanks to our speakers, Rene, John, and Todd, for their time and efforts in preparing and delivering the session. As mentioned earlier, the session was recorded, and we’ll be sharing a copy of the recording and the slides with you in the coming days. Thank you once again, and enjoy the rest of your day.


Expert Panelists

+ John Wilson, ACE, AME, CBE

Chief Information Security Officer and President of Forensics, HaystackID

As Chief Information Security Officer and President of Forensics at HaystackID, John provides consulting and forensic services to help companies address various matters related to electronic discovery and computer forensics, including leading forensic investigations, cryptocurrency investigations, and ensuring proper preservation of evidence items and chain of custody. He regularly develops forensic workflows and processes for clients ranging from major financial institutions to governmental departments, including Fortune 500 companies and Am Law 100 law firms.


+ Todd Tabor

Senior Vice President of Forensics, HaystackID

In 2021, Todd Tabor joined HaystackID and is currently the Vice President of PMO, Forensics. In this role, he is responsible for the identification, hiring, training, and development of HaystackID’s Forensic Project Management Team as well as developing the processes and procedures of that team. Prior to joining HaystackID, Todd was the Executive Vice President of Operations for Veristar.


+ Rene Novoa, CCLO, CCPA, CJED

Vice President of Forensics, HaystackID

As Vice President of Forensics for HaystackID, Rene Novoa has more than 20 years of technology experience conducting data recovery, digital forensics, eDiscovery, and account management and sales activities. During this time, Rene has performed investigations in both civil and criminal matters and has directly provided litigation support and forensic analysis for seven years. Rene has regularly worked with ICAC, HTCIA, IACIS, and other regional task forces supporting State Law Enforcement Division accounts and users in his most recent forensic leadership roles.

Assisted by GAI and LLM technologies.

SOURCE: HaystackID

Written by:

HaystackID
