About Episode:

The Legal Tech Briefs podcast explores the intersection of technology and the legal field. In this episode, "AI and Legal: Can We Trust It?", experts Simran from CaseFox and Anas from Qudah Law Firm discuss the legal challenges presented by artificial intelligence (AI) in the legal industry. They delve into topics like AI's impact on privacy laws, the need for updated regulations, and concerns about potential job losses due to AI. The podcast also addresses the importance of establishing legal frameworks for holding AI accountable. In a creative twist, the guest lawyer proposes a new law mandating transparency in AI decision-making processes, emphasizing the need for responsible and ethical AI development and use.

Anas Abdul Alhameed Al-Qudah:

Legal Consultant in the Firm's High-Tech team. Anas provides tech firms of all sizes and fields (FinTech, RegTech, WealthTech, InsurTech, and other related clients) with cost-effective, results-oriented counsel, in order to position them optimally for long-term growth and to ensure that day-to-day operations are in line with sound strategic management of tech firms. Anas was appointed by the Minister of Digital Economy and Entrepreneurship to the national committee for preparing a National Code of Artificial Intelligence Ethics, a committee concerned with codifying the principles and values to be followed in designing, training, and operating artificial intelligence systems. Anas is an alumnus of the University of Reading, UK (LLM, 2020), where he obtained First Class Honours. During his studies, Anas presented numerous academic research papers on contemporary issues in the legal aspects of technology under EU and UK policies, including liability for harms arising from algorithmic malfunctions, the role of artificial intelligence in achieving clarity on market data and legislation in investment decision-making, the effectiveness of the robotic judge as a practical solution to the legitimacy crisis of international investment arbitration, and the difference between cybersecurity and cybercrime.

Simran Sinha:

Simran Sinha is a legal tech writer at CaseFox, where she is at the forefront of a movement to drive technological reform in the legal industry. With a keen eye for innovation and a passion for law, Simran brings a unique perspective to the intersection of technology and legal practice.
Her approach is deeply rooted in understanding the needs and challenges faced by legal professionals in an increasingly digital age. She doesn’t just analyze the issues; she advocates for practical and effective approaches to resolving legal matters through the lens of technology and law.
Simran Sinha is not just a host; she’s a visionary in the legal tech world, dedicated to creating a more efficient and accessible legal system for all. Tune in to her podcast to gain invaluable insights into the future of law, technology, and the profound changes they’re bringing to the legal landscape.

Podcast Transcript

Hi Anas, welcome to CaseFox. This is the Legal Tech Briefs channel, the podcast where we discuss the latest trends in the legal tech industry. In this episode we are joined by Anas from Qudah Law Firm.

He is a real legal enthusiast, so we are having him here today, and we will be talking about AI and how it is actually impacting legal firms and the industry in general.

We have a few questions we would like to ask him, so let's see how it goes. Anas, over to you.

Thank you so much for having me today. To introduce myself: I am Anas Al-Qudah, a tech legal consultant at Qudah Law Firm in Jordan.

And it’s quite nice to be with you today, talking about one of the trends that concerns the entire market, which is about the interaction between AI and the legal domain.

Thank you. So my first question for you would be: how do you think AI is affecting privacy laws? What is your take on that?

Well, the rise of AI undoubtedly has significant implications for personal data privacy. We all know that as AI technologies become more sophisticated, they gain the potential to collect, analyze, and process vast amounts of personal data.

This actually creates both opportunities and challenges for data privacy regulations. We can view the impact on personal data privacy laws through several points.

Let’s get started with the first point. We believe that AI systems require large amounts of data to function effectively, often including personal information.

This heightened data collection raises concerns about the sensitivity of data being processed, which ultimately leads to potential privacy risks.

Yes, we agree.

The second point is that AI-driven automated decision making can have significant consequences for individuals. If these decisions are based on sensitive data, such as health data or financial data, the right to know how those decisions were made and the ability to challenge them become a vital privacy concern. And I should highlight the fact that we have something called black box AI.

Current technical capabilities are not able to reveal how data are processed inside black box AI, so we actually have no legal solution to this concern regarding black box AI.

Okay, that was interesting.

The third point is that AI applications often involve international transfers of data. This poses challenges and creates a real need to unify the different data protection regulations across countries and jurisdictions. And the crucial thing is that we need to restructure our legislative infrastructure, as some laws have become outdated.

Nice, that was an interesting answer for us. We have been hearing a lot about the potential of AI leading to job losses; you must have heard about it, as many people are losing their jobs because of AI.

So what do you think about it?

What I believe is that human intelligence cannot be replaced in any way, because, at the end of the day, human intelligence produces the AI-enabled products, and human intelligence has to supervise the conclusions of AI. I mean, if an AI malfunctions, someone has to be held responsible for that.

Exactly, that is in the interest of individuals. So do you think governments should introduce new regulations or policies to address this issue of job losses? And if governments do bring in such policies, what should they look like?

What do you suggest to the general public? We would just like to hear any suggestions coming directly from you.

Yes, we believe governments should consider introducing new policies to address potential job losses that might result from AI-driven automation. There are a few options to follow.

One of the options is that governments can invest in reskilling and upskilling programs to help displaced workers acquire new skills for emerging industries and technologies.

Yes. And we have another option that should be in place together with the first one, which is to regulate automation.

Right. Yeah, introducing regulations to moderate the pace of automation in certain industries might help mitigate sudden job losses because, you know, this has societal ramifications.

If a large number of people lose their jobs, the country might face a crisis. Governments may also introduce policies to promote job creation in areas where human skills are still essential and AI can complement human work. I mean, it should not replace humans; it only has to facilitate the work of humans.

Exactly, that was a very interesting point: AI should never replace human work, but should actually help us do it better. That is something we also stand for, and it drives me to my next question. As you already mentioned, AI systems are becoming more autonomous and capable of making decisions, so it is increasingly important to establish legal frameworks to hold them accountable for their actions or mistakes. How do you suggest we can hold AI accountable?

Okay, interesting question. Yes, we need to establish a comprehensive legal framework for AI. Establishing legal frameworks to hold AI systems accountable for their actions or mistakes is not an easy challenge, because traditional legal philosophy often centers on human responsibility and the theory of agency.

However, with the increasing autonomy and decision-making capabilities of AI, we need a new approach to address such a paradigm shift.

The question is how to start. At the first stage, we have to re-evaluate traditional legal philosophy. As AI systems become more autonomous, the traditional concept of holding individuals accountable might not directly apply to AI. AI, being a non-human entity, lacks intentionality and moral agency, which, as I just said, makes traditional liability principles challenging to enforce. Therefore, it is crucial to re-evaluate that philosophy to accommodate AI accountability. There are two practical solutions.

One of them is adopted by the US. This approach holds the AI producer responsible for any damages caused by the AI system. The idea is based on the fact that, since the producer benefits from the potential of AI products, the producer also has to bear the costs of such products. It is an economic principle that the US has adopted. Actually, the US took a step further and issued a relatively new law.

It is the Algorithmic Accountability Act; I think it was issued four years ago. So it is new, but it is the only act I have found that comprehensively addresses what we call deep-learning capabilities and the decision parameters that AI uses.

The second approach, which the EU has adopted, is to create an insurance system where AI producers contribute to a shared pool of funds. When they start to produce AI-enabled products, it becomes mandatory for them to contribute to this insurance system. People harmed by AI malfunctions can then claim their financial rights from this insurance system.

Okay, I see. That is something very interesting we have heard from you. We were not aware of it, and some of our viewers also might not be aware of it.

So that is very cool to know; thank you so much for that answer. Now we are heading to our next question: what do you think is the most pressing legal challenge posed in the near future by the rapid advancement of AI?

Okay. Regulations are always restrictive, while AI technologies continue to evolve, so the legislature has to keep finding new ways to facilitate their development, not just impose restrictions every now and then.

I think this is not an easy task, but it is solvable, and it does not have to take that much effort if we closely collaborate with lawmakers, keep them updated about market trends and the new applications of AI, and share with them our views on the consequences of implementing these applications.

By working with them, we can actually develop effective and adaptable regulations. I just want to mention that Jordan, the country I belong to, has established ethical guidelines for AI applications: how AI producers can use AI applications in light of ethical principles, to protect the mutual interests of both end users and producers.

That was one of the greatest initiatives that could have been taken, because with the rise of AI, taking a particular step like this could have a significant effect. I guess we are coming to the end of the podcast, so there will be one last question. Now I want you to think: what kind of law or regulation would you create, if you could propose one new law or regulation related to AI?

Okay, if I could propose one new law or regulation related to technology, it would be an AI ethics accountability act.

The act would aim to establish clear and comprehensive guidelines and standards for the ethical development and deployment of AI technologies, as well as for the economic benefits of their implementation.

This law should address the potential risks and how we can maximize the benefits of these applications in every sector. I mean the educational sector, the legal sector, the health sector; all sectors should be covered.

Okay, that was one of the most insightful points we have had; this one was really informative for me. This has been a very informative discussion, and I really thank you for joining the call.

And I guess our listeners must have enjoyed this whole podcast. I just want to thank you for joining us today.

You're welcome, and thank you for your effort and time in organizing this podcast. Thank you so much.

Yeah, thank you so much.

