
Volume 6, Issue 5
Welcome
Welcome to the fifth 2025 issue of Decoded, our technology law insights e-newsletter.
ANNOUNCEMENTS
Please join us in welcoming William S. Thompson, the immediate past United States Attorney for the Southern District of West Virginia, who has joined the law firm as Counsel. Will brings decades of experience across the legal, judicial, and business arenas. At Spilman, his practice will focus on litigation, with particular emphasis on alternative dispute resolution and white collar criminal defense.
Congratulations to 18 Spilman attorneys located in West Virginia and two of our Virginia attorneys on being recognized by Super Lawyers for 2025. The objective of the Super Lawyers selection process is to create a credible, comprehensive, and diverse listing of outstanding attorneys from more than 70 practice areas. The annual selections are made using a patented multiphase process that includes a statewide survey of lawyers, an independent research evaluation of candidates, and peer reviews by practice area. Many of these attorneys practice in our Technology Practice Group and ancillary departments.
During the summer months, our firm is pleased to host a talented group of law students, who get the opportunity to research and write, shadow our attorneys, and learn about the practice of law in a firm setting. As young professionals still deeply involved in higher education, our Summer Associates will be contributing to our summer publications and sharing their perspectives as both students and future legal practitioners. Please join us in welcoming Addison Gills to the Decoded team for this special summer edition.
We hope you enjoy this issue and thank you for reading.
Generative AI May Shoulder Up to 40% of Workload, Some Bank Execs Predict
“Six in 10 bank executives list generative AI as a top investment priority this year, according to a survey published in April by KPMG.”
Why this is important: Banks are spending big on generative AI (genAI). That’s the thrust of this article. Two hundred U.S. bank executives were polled for the KPMG report published in April. Six in 10 said genAI is a top investment priority for this year, while 57 percent said genAI is an integral part of their long-term vision. Half stated that their banks are actively engaged in pilot projects to use AI in fraud detection and financial forecasting, and 34 percent are using AI in cybersecurity. About three-fourths of those executives said they expect AI to handle up to 40 percent of their teams’ daily tasks by the end of 2025. These trends are fueled in part by competition from fintech firms. The banks that successfully weave AI and genAI into their operations, and train their people to use and maximize it, stand to gain a massive competitive edge over those that don’t, which may find themselves left behind as early as the end of this year. --- Nicholas P. Mooney II
Hacked FMCSA Accounts Continue Targeting Shippers, Brokers
"Hackers exploit FMCSA accounts to impersonate carriers and steal cargo.”
Why this is important: We think we know what freight theft looks like. It is bandanaed bandits holding up stagecoaches in the Old West. It is the mafia hijacking loads out of JFK airport in the 1970s and 1980s. It is people breaking into trains to steal Amazon packages when the train slows for a tight curve. Well, freight theft has finally hit the 21st century. Gone are the bandits, the mobsters, and the opportunists. Now come the cybercriminals, who can steal entire truckloads of merchandise from a desk chair. Cybercriminals are using stolen credentials to change Federal Motor Carrier Safety Administration (FMCSA) contact information for legitimate carriers to their own. Then, they pose as the legitimate carrier on load boards, where truckers get their loads, or they impersonate freight brokers on those boards. The criminals then have these innocent truckers move stolen goods, or they redirect real loads to other locations so that the entire load can be delivered to the criminals’ doorstep and stolen.
This scam has now branched out into all forms of freight, but it first gained notoriety in relation to the transport of exotic cars. A legitimate freight broker on a load board would assign an independent vehicle transport driver an exotic car to move. Scammers watching the load boards would see this, use stolen credentials to impersonate the freight broker, and redirect the driver to a new delivery location. The driver, believing the change came from the broker, would then deliver the exotic car right into the hands of the criminals and hand over the keys. Before anyone knew what was going on, the exotic car was loaded into a container and shipped overseas, never to be seen again.
Victims of this type of freight theft have been unsuccessful in getting the authorities to stop it. In response, freight brokers are now teaming up to try to stop these thieves themselves, including by double-checking credentials and being wary of sudden changes to contact information. Truckers are also catching on and are suspicious when the delivery location of a load is changed at the last minute. However, this does not seem to be stemming the flow of bogus carriers and fake freight brokers stealing loads. For now, the only answer is for shippers, brokers, truckers, and receivers to remain vigilant and take the necessary steps to protect their data and loads from these scammers. --- Alexander L. Turner
USCO Copyright and Artificial Intelligence Part 3: Generative AI Training
By Shane P. Riley
In early May, the U.S. Copyright Office (USCO) released a pre-publication report—Part 3 of its series on Copyright and Artificial Intelligence. This installment examines the use of copyrighted works in the development of generative-AI systems and seeks to answer whether, and when, copyright owners’ permission is required for training AI models with protected material.
The debate over using copyrighted works in AI development is often framed as a battle between infringement (i.e., the rights of copyright owners) and fair use (i.e., the rights of the public). In its report, the USCO discusses both topics in detail after first summarizing the model-training process.
First, it finds that using copyrighted works to train AI models may constitute prima facie infringement because the works are reproduced and distributed during data collection, curation, training, and retrieval. Even outputs generated after training may infringe when the model produces a near-exact replication of, or something substantially similar to, the original.
Artists and other creators argue that their works are fed into models without permission to create substitute materials that then compete with their originals, eroding licensing income, direct revenue, and consumer demand.
Although it seems clear in the traditional sense that infringement occurs during AI development, critics warn of the public policy costs of strict enforcement. Requiring AI companies to license works could stifle technological progress and concentrate power in firms that already control vast troves of data, leaving the United States less competitive in the global AI race.
Those critics contend that the fair-use doctrine should cover training. The USCO analyzes the four non-exclusive factors for fair use as they apply to AI: (1) the purpose and character of the use (commercial versus non-profit/educational); (2) the nature of the copyrighted work; (3) the amount and substantiality of the portion used; and (4) the effect of the use on the potential market for the work.
The Office concludes that fair-use analysis is context-specific and that there is no blanket exemption for generative AI training. Regarding the first factor, training is usually commercial, and the report is skeptical of the outputs’ transformative nature because of near-replication. On the second factor, models are often trained on creative (not factual) content, weighing against fair use. For the third factor, most training involves copying entire works, again weighing against fair use. Finally, for the fourth factor, AI-generated outputs could significantly harm both the direct and derivative markets for the originals, as well as emerging licensing markets for training data.
Overall, the report finds fair use disfavored when large, for-profit AI companies produce outputs that compete with originals. Some applications, such as non-profit, academic projects undertaken for non-competitive purposes, may still qualify.
Crucially, the USCO offers no specific legislative or policy recommendations, concluding that traditional, case-by-case fair-use analysis will suffice and that a voluntary licensing market should continue to develop. It does, however, leave open the possibility that Congress might create targeted exemptions or compulsory licenses.
Since the report’s release, significant pushback and political debate have followed. Proponents of rapid AI development criticize the USCO’s refusal to declare AI training categorically fair use. Predictably, tech firms such as OpenAI, Meta, and Google oppose the findings, while prominent artists, including Paul McCartney and Elton John, support strong copyright protections, consent mechanisms, and licensing requirements.
Immediately after publication, the Trump administration abruptly dismissed the Register of Copyrights, Shira Perlmutter, only days after firing Dr. Carla Hayden, the Librarian of Congress (the Library of Congress oversees the USCO and houses all federally registered copyrighted works). On May 22, Ms. Perlmutter sued the administration, alleging her removal was unlawful and ineffective because the Library of Congress is an arm of Congress, not the executive branch, and the Register can be appointed, and therefore removed, only by the Librarian of Congress, not the President.
In sum, the recently released USCO report may have provided equal parts clarity and controversy on the issue of using copyrighted works to train and develop generative AI systems. For now, this area of copyright law will continue to develop case by case and as Congress sees fit. There is no doubt, however, that this topic will remain in the headlines as the use of AI continues to proliferate and both sides of the debate continue to clash.
Why 2025 is the Year AI will Revolutionise Construction
“Generative AI is the tech buzzword of the decade; American Big Tech firms alone announced an investment of $300bn in AI infrastructure.”
Why this is important: Despite the daily barrage of articles, discussions, and news stories related to the development and use of generative artificial intelligence (AI), the construction industry may be one of the slowest industries to embrace AI technology. Part of the reason relates to the very nature of the construction industry’s work product. Large language models (LLMs) were initially developed for written text, not for product details, architectural renderings, CAD drawings, and documents filled with architectural, engineering, and construction instructions. AI was not focused on analyzing the construction data needed to bring projects to life in three-dimensional planning.
The landscape for using AI is expanding so that the construction industry can take advantage of the technology. AI can now monitor live-streamed video from a job site so that the contractor can monitor worker safety. AI can analyze 3D building scans for building design teams and help identify construction and planning conflicts among the various trades. AI can help automate digital and mechanical systems. Further, agentic AI can learn from prior mistakes, which is useful on repetitive projects and designs. In an approach called Hierarchical Reinforcement Learning (HRL), goals are broken down into sub-goals for more accurate construction planning and implementation, as illustrated in the sketch below. Creating industry-native environments in which AI agents can work will help the industry shorten construction durations, reduce the variety of skills and number of contractors needed in a time of labor shortages, and allow for adaptable construction practices as unexpected issues arise during construction. --- Stephanie U. Eaton
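To make the HRL idea concrete, here is a minimal sketch of hierarchical goal decomposition in Python. The task names, hierarchy, and traversal are illustrative assumptions only; a real HRL system would learn a policy at each level rather than follow a fixed tree.

```python
from dataclasses import dataclass, field

@dataclass
class Task:
    """A goal that may be decomposed into sub-goals, HRL-style."""
    name: str
    subtasks: list["Task"] = field(default_factory=list)

    def plan(self, depth: int = 0) -> None:
        # Walk the hierarchy, printing each goal indented under its parent.
        print("  " * depth + self.name)
        for sub in self.subtasks:
            sub.plan(depth + 1)

# Illustrative construction hierarchy, not drawn from any real project.
build = Task("Erect structure", [
    Task("Pour foundation", [Task("Excavate"), Task("Set rebar"), Task("Pour concrete")]),
    Task("Frame walls", [Task("Stage lumber"), Task("Assemble panels")]),
])
build.plan()
```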
Microsoft Wants Everyone to Use an Open-Source Technology to Create an 'Agentic Web' Where AI Agents Interact with Other AI Agents
“Forget ChatGPT and other LLMs, agentic AI is where it's at now.”
Why this is important: This short article helps explain the concepts of the agentic web and agentic AI. Much like the Hypertext Transfer Protocol (HTTP) is the foundation for exchanging data over the internet, the Model Context Protocol (MCP) would (or will) allow different systems and AI platforms to collaborate (a minimal sketch appears below). The benefit is that AI platforms could share data to train one another and provide better results. Traditionally, an AI platform provides one function and runs autonomously, or at least semi-autonomously, to perform it. Agentic AI could allow AI platforms to learn from one another, perform many functions, and perform them better. For example, if one AI platform learns of a new cybersecurity threat or defense, other platforms could instantly be made aware of it and incorporate that information into their results. However, the author warns of the danger of the opposite occurring: if an AI platform implements a bad cybersecurity defense, that defense could be taught to other platforms, trapping them in a harmful feedback loop. As we come closer to fully agentic AI and an agentic web, the potential for issues like this will need to be addressed. --- Nicholas P. Mooney II
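For readers who want a feel for what agent-to-agent messaging might look like, here is a minimal sketch. MCP messages are built on JSON-RPC 2.0, and "tools/call" is a method the published MCP specification defines; the tool name, arguments, and canned response below are hypothetical, and the "handler" is a toy stand-in for a second agent, not a real MCP server.

```python
import json

# Build a JSON-RPC 2.0 request of the kind MCP uses. The tool name and
# arguments are invented for illustration.
request = json.dumps({
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "threat_intel_lookup",               # hypothetical shared tool
        "arguments": {"indicator": "198.51.100.7"},   # IP from a documentation range
    },
})

def handle(raw: str) -> str:
    """Toy stand-in for a second agent: parse the request, return a response."""
    msg = json.loads(raw)
    canned = {"verdict": "malicious", "source": "shared-threat-feed"}  # made-up result
    return json.dumps({"jsonrpc": "2.0", "id": msg["id"], "result": canned})

print(handle(request))
```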
Healthcare Remains Top Target for Cybercriminals with an Uptick in Hacking Attacks in 2024
“Ransomware attacks across all industries rose by 37% and are now present in 44% of breaches, despite a noticeable decrease in the median ransom amount paid, the report found.”
Why this is important: The upward trend of cyberattacks on the healthcare industry continues in 2025. A recent report from Verizon showed that ransomware attacks are up 37 percent. Despite this significant increase, the good news is that the median ransomware payment is down significantly, from $150,000 to $115,000. This decrease in ransom payouts is attributed to a 50 percent increase in the number of victims following the federal government’s recommendation to refuse to pay the ransom. The increase in victims not paying is also likely attributable to the fact that more healthcare facilities prepare in advance of a ransomware attack and have their systems regularly backed up, so they do not need to pay the ransom to regain access to their data.
With the increase in attacks, the targets in the healthcare sector are changing. Cybercriminals are now attacking vendors on the periphery of the healthcare industry as a way to get patient information. Radiology service providers, pharmaceutical firms, IT providers, medical transport companies, and pharmacies are prime targets for these new attacks. Not only are the tactics evolving, but so are the motivations behind the attacks: espionage-motivated attacks on the healthcare industry rose from 1 percent in 2023 to 16 percent in 2024.
Even if you are a small healthcare provider or a healthcare-related vendor, there are simple and inexpensive ways to protect your network. The most important is to install all software patches to protect against known vulnerabilities. Regular employee training is another, teaching employees to recognize evolving threats and avoid becoming victims. Planning ahead is also key: with the increase in cyberattacks and their growing sophistication, having a data breach response plan in place before you are targeted is a necessity, including knowing what to do if one of your vendors suffers a cyberattack that compromises your patients’ data. If you need assistance preparing to thwart a cyberattack, or reviewing your vendor contracts to ensure you are protected if a vendor becomes the victim of a cyberattack, please contact a member of Spilman’s Health Care Practice Group. --- Alexander L. Turner
Over $1 Million Invested into Nuclear Technologies in Virginia
“These grant dollars — provided through Virginia Energy’s Virginia Power Innovation Program — will fund critical research, support workforce training and ‘further [position] Virginia as the nation’s leader in next-generation nuclear technologies,’ the governor’s office said.”
Why this is important: Virginia is staking its claim in the future of clean, reliable energy by investing in advanced nuclear technology. The VIN Hub isn’t just a research initiative; it’s an economic development engine. By fostering partnerships between universities, energy firms, and manufacturers, Virginia is creating a pipeline for innovation and high-skilled jobs. This investment also signals a coming wave of regulatory engagement: nuclear innovation will require federal approvals, land use planning, environmental compliance, and potentially even legislative updates. As the legal landscape evolves to accommodate modular reactor designs and newer fuel cycles, Virginia attorneys will play a key role in guiding permitting, safety, and public-private frameworks.
Critically, this investment comes at a time of rising energy demand across the state. From agriculture to manufacturing—two pillars of Virginia’s economy—businesses depend on stable, predictable energy prices. And with rapid data center development straining the electric grid, nuclear power offers a low-emission, scalable solution that can help meet 24/7 power needs without price volatility. In short, this investment is a foundational step for long-term economic competitiveness in the Commonwealth. --- Addison Gills, Summer Associate
3 Tips for Improving Security Amid the Growth of Generative AI
“How can higher education institutions tackle growing concerns about the use of generative artificial intelligence for cyberattacks?”
Why this is important: Generative AI tools such as ChatGPT, Microsoft Copilot, and Google Gemini are increasingly being used across higher education institutions for everyday tasks like summarizing meeting notes and drafting emails. While these tools offer convenience and productivity benefits, there are mounting concerns about their potential misuse, especially in cyberattacks. As AI technology advances, so does the sophistication of phishing emails and deepfake content, raising alarm among cybersecurity experts.
Isaac Galvan, community program director for cybersecurity and privacy at EDUCAUSE, outlines three crucial areas for colleges and universities to focus on to enhance digital safety. First, institutions need to develop clear AI use policies to guide staff and students on appropriate and secure usage. He highlights the University of Michigan’s approach, which emphasizes privacy, security, accessibility, and equitable access as core principles in their AI implementation.
Second, ongoing education and training are essential to build a security-conscious campus culture. Galvan points to the increasing threat of phishing emails and recommends that institutions teach students and staff how to recognize and report suspicious messages. Cybersecurity awareness should also be incorporated into academic curricula and extend beyond campus to include personal technology use and social media behavior.
Finally, Galvan stresses the importance of improving identity and access management (IAM) to address the evolving threat landscape, which includes deepfake scams and AI-powered hacking tools. He advises investing in advanced IAM technologies that can distinguish real human users from malicious or automated activity, and encourages the verification of audio or video messages through trusted communication channels. As generative AI tools become more integrated into academic environments, strong oversight and proactive cybersecurity measures are essential. --- Shane P. Riley
Covered California Website Trackers Shared Sensitive Health Data with LinkedIn
“The state’s health insurance exchange transmitted pregnancy and domestic abuse data during a marketing campaign.”
Why this is important: Do you know where your protected health information (PHI) is going? Recently, residents of California who participated in the state’s health insurance exchange had their PHI shared with LinkedIn via trackers installed on the exchange’s website. Information submitted by state residents to the exchange’s website was sent to LinkedIn via LinkedIn’s Insight Tag. These trackers transmitted data regarding whether individuals were blind, were pregnant, used a large number of prescription drugs, were transgender, or were the victims of domestic violence. The trackers have since been removed, and the error was attributed to the state’s transition to a new ad agency that was marketing the exchange on LinkedIn. The intent was to use the trackers to remind people of an upcoming deadline for open health insurance enrollment. The trackers were allegedly in place for a year before they were discovered, and millions of participants’ data were impermissibly shared.
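As a rough illustration of the mechanism at issue, here is a minimal sketch of how a third-party marketing tag can leak page context: whatever the page script places in the beacon URL travels to the ad network. The endpoint, field names, and values below are hypothetical; this is not LinkedIn’s actual Insight Tag code.

```python
from urllib.parse import urlencode

# Hypothetical page context a marketing tag might collect; the fields,
# values, and endpoint are invented for illustration only.
page_context = {
    "url": "https://exchange.example.gov/apply",
    "step": "household-info",
    "pregnant": "true",  # a sensitive answer captured from the page state
}

beacon_url = "https://ads.example.com/collect?" + urlencode(page_context)
print(beacon_url)  # everything in the query string leaves for the ad network
```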
Even though the breach was only discovered in late April, LinkedIn has already been sued in a putative class action. The lawsuit, filed in the Northern District of California, alleges that LinkedIn and Google received health data from web trackers on Covered California without the knowledge or consent of users. The putative class representative alleges that the trackers violate federal and California law, including the California Invasion of Privacy Act. In addition, the breach has drawn the attention of lawmakers who want to investigate how this happened, how many people were impacted, and how to prevent similar breaches in the future. --- Alexander L. Turner