
Brooke Anderson-Tompkins on the ‘lifescape’ of AI

Editor-in-Chief Sarah Wheeler spoke to Brooke Anderson-Tompkins about responsible AI and the scope of implementing AI in your business. Anderson-Tompkins led First Priority Mortgage for 15 years and is a former chairman of the Community Mortgage Lenders of America (now Community Home Lenders of America). She is now the founder and CEO of BridgeAIvisory and will be a speaker at HousingWire’s AI Summit.

This interview has been edited for length and clarity.

Sarah Wheeler: You went from running a mortgage company to starting an artificial intelligence consulting firm. What made you take that jump?

Brooke Anderson-Tompkins: Jump is a great way to put it! The short answer to your question is that I was driven by a passion for innovation and a desire to use AI to create impactful solutions for the industry.

It definitely wasn’t about the hype. But when I looked at the things I was passionate about, the opportunity to incorporate artificial intelligence into the mortgage ecosystem definitely made me want to do it. And the real possibility of creating efficiencies, reducing costs and preserving what I call the “heart of the people” certainly had my attention.

SW: How does your background shape the way you approach BridgeAIvisory customers?

BAT: Having spent almost the last two decades in the real estate industry, I have first-hand reference to the drivers – starting on the real estate side and then cascading through mortgage and corporate core services. I was also based in New York, so I spent a fair amount of time on regulatory and compliance work, and that translated over time into my many years in DC working on the advocacy component. And all of those things carry over broadly to business.


Many business components span across industries. So especially when it comes to AI, there are components that have a broad scope and can be applied to business in general probably 80% of the time. And staying involved in the advocacy piece is important. We don’t want another Dodd-Frank and the cost implications that come with it. The BridgeAIvisory approach is very similar in many ways, in that I don’t view AI as a silver bullet.

It has great potential if it is strategically considered, implemented, trained and monitored – against whatever benchmarks or ROI you set. If the principles are incorporated from the start, there are opportunities for much better results.

SW: What conversations are you having about AI right now?

BAT: It’s been interesting to me, in the few months since I launched BridgeAIvisory, that the conversation is just getting started, and the AI Summit kicks off in a few weeks! It starts with level-setting on the language of artificial intelligence. I call it “from the boardroom to the break room.” It’s not enough to have a session around the AI language; you then have to use and integrate that language to build a comprehensive strategy and identify what value you bring to the table. And then that sets the stage for what’s called a clean-sheet-of-paper process – a concept that Elizabeth Warren introduced to me several years ago.

And what I’ve learned is that the same words can have different meanings and different contexts and still be accurate. And so identifying right from the start what those definitions are for the project at hand, and repeating them often, can be a key to successful execution, because language becomes part of the culture and culture is a key component to success.


SW: We’re excited to have you speak at our AI Summit on Responsible AI. What does that term mean?

BAT: My response stems from the training I received from the Mila AI Institute in Montreal. Mila is a globally recognized deep learning research institute founded by Yoshua Bengio in 1993. Part of my premise here is that it is very important to learn from experts.

There is not yet a globally recognized definition of responsible artificial intelligence. For BridgeAIvisory, I adopted Mila’s definition: “It is an approach whereby the life cycle of an AI system must be designed to maintain, if not improve, a set of fundamental values and principles, including the internationally agreed human rights framework, as well as ethical principles.” And it goes further by “referring to the importance of thinking holistically and carefully about the design of any AI system, regardless of its application area or purpose. It is therefore a collection of all the choices, implicit or explicit, made in the design of an AI system’s life cycle that make it irresponsible or responsible.”

We’re so used to, ‘Okay, here’s the definition. Give me my job, let’s go.’ But AI is a lifescape – it goes so much beyond business. We’re used to something like Dodd-Frank, which had an impact on the financial services industry: we focused on that and started working to solve the problem. This is so much bigger than that.

So I think we have to be conscious when creating the solutions to keep these things in mind. And ultimately, the good news is that if you look at that definition, the core principles are things that we’re all very familiar with: they’re ethics and values, transparency and explainability, accountability and governance. It is about safety and soundness, privacy and data protection, inclusivity and diversity and environmental sustainability. The good news is that we already do that.


However, I don’t think we necessarily look at all of these pieces while working on a particular project. And that’s part of the responsible AI piece: looking at it holistically on a project-by-project basis.

This is part 1 of this interview. Tune in for part 2 next week.
