Episode 5
· 46:59
Isabella Scarinzi 0:00
Hey.
Welcome back to the Bible Translation Innovation Podcast, a show brought to you by the ETEN Innovation Lab. My name is Isabella, and I am joined here by my friend Klappy. Hi, Klappy. How are you today?

Klappy
Hello, Isabella. I'm doing well. Thank you.

Isabella Scarinzi
Good. Well, first off, happy 2026!
We want to start the new year by looking at the remaining All Access Goal chapters and languages that are at risk of not being completed by 2033. Stepping into the year 2026 means we have seven years left now, but ETEN's All Access Goals are still projected to be accomplished only by 2041. That gap is why the lab is becoming laser-focused on identifying which language goals are most at risk, and we're also very focused on identifying the innovations that will be needed to accelerate progress for those specific goals. If you're not familiar with the All Access Goals yet, I suggest visiting ETEN's website at eten.bible for more information. We also talked about the All Access Goals on the very first episode of this podcast, so that might be a helpful recap too. To start the conversation, I also want to introduce Peter Huang, our operations director at the Innovation Lab. Welcome to the podcast, Peter. Can you tell us a little bit about yourself?
Peter Huang 1:39
Yeah, thanks for having me. Like Isabella said, my name is Peter Huang. I help lead the lab, specifically overseeing its operations functions. I have been with the lab since it started about five years ago, recently moved into the operations lead role, and now I'm helping to lead this new initiative that we'll talk about today. Appreciate you having me.
Isabella Scarinzi 2:09
Thank you. Yeah, we're excited to have you. If Peter is joining for this podcast, it means it's an important topic for us to discuss. As mentioned, Peter and Klappy have been analyzing our progress toward the All Access Goals, trying especially to identify the goals that are at risk. So it's probably a good place to start by asking: how do we identify which All Access Goals are at risk, and where are we in this current analysis you've been working on?
Peter Huang 2:41
Yeah, I'll start by jumping in here. Klappy, feel free to put a hand up, stop me, or interject at any time. The first disclaimer I want to give is that what we're doing is a really specific All Access Goal risk analysis effort. It is not a duplication or a redundant effort replicating existing systems that are absolutely critical to ETEN, such as ProgressBible or Rev 7:9 or some of the other data efforts happening among the implementing partner agencies. So I think it's important to note that we're actually using their data, along with other data sources in aggregate, to analyze which languages and All Access Goals are at the highest risk of not meeting 2033. Klappy, I'll pause and see if you have anything else.
Klappy 3:46
Yeah, I think it's important to mention that, like you said, we're not replacing or parallelizing efforts here. We're actually working alongside and layering this effort on top of the data we're getting from ProgressBible, Rev 7:9, and others. And I think that's important because we're specifically looking for those at-risk languages; we're not necessarily tracking progress, right? When you're at the beginning of an effort like the All Access Goals, it's good to track how far along you are. But we're actually looking at the last seven years, right? We're closing in on 2033, so we need to be looking at what's at risk. You actually look at the data differently this way, and that's what's shaping our perspective. Before we talk about how we're doing that: the goal here is looking at the things at risk, not the things already done.
Peter Huang 4:49
Yeah, yeah. And just to put a little more context around this: the ETEN steering committee, just a couple of months ago, started to come out with what they're calling an innovation rollout strategy, really targeting these at-risk languages with a hyper-focused view on 2033. As part of that, you need sharpened data in order to make data-informed, smart, strategic decisions about what to aim at. The lab is helping to spearhead that language analysis effort, at least in this initial phase; then we'll have a task force, which I'll talk about in a second, and we'll move into a second phase and then a third phase. So this is really a three-phase effort. This first phase we just finished recently, and it was a full-scale initial data pull and aggregation: getting everything together, getting everything standardized in the way we're looking at it, and then splicing, filtering, and organizing the data. Then Klappy actually built kind of an AI. I'm not the AI guy here, so I'll pass it to Klappy in a second, but essentially some AI functionality was applied to the data we had, which really helped do some of the analysis for us, alongside our own human view of things. So we finished this first phase of taking every work record available, every piece of language data available, collating it, splicing it, and then using AI as well for an initial view. I'll let Klappy talk a little more about the AI, and after that I can go into phase two and phase three and what we've done there.
Klappy 7:01
Yeah, this touches on our last episode, where we talked about using AI for software development. This was one of the use cases I really enjoyed being a part of with Peter. For maybe 15 to 20 years, when I would first build proof of concepts for software development, I would actually start in Excel spreadsheets, because you can model data flow from sheet to sheet with macros, get an idea of how to manipulate the data, use those sheets as examples to go build software from, and then compare results to make sure your algorithms and software match the outcomes on the spreadsheet. That's actually where we started with this initiative. Peter, you had worked hard on data exports from ProgressBible and Rev 7:9, analyzing the data that exists in those systems: how do we filter it and start looking at what remains? Once we look at what remains, we can look at the spreadsheet view of the filtered data. So I took that and talked with an AI, kind of like I explained in the previous episode, walking through an iterative process with different AIs to replicate what you did in the spreadsheet, so that in the future we could just drop a flat file, a CSV export, straight into the system, and it automatically runs the same analysis you did without repeating that lengthy manual work. The first step when I got involved was making sure we could repeat what you did in your spreadsheet, and that was a really cool experience, being able to test out AI for this process.
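As a rough illustration of that drop-in-a-flat-file idea, here is a minimal sketch; the column names and values are hypothetical, not the actual ProgressBible or Rev 7:9 export schema:

```python
import csv
import io

# Hypothetical flat-file export: language code, goal type, chapters remaining.
# These columns are illustrative only, not a real partner schema.
SAMPLE_EXPORT = """language,goal,chapters_remaining
aaa,Full Bible,1189
bbb,New Testament,40
ccc,25 Chapters,0
"""

def load_remaining(csv_text):
    """Parse a CSV export and keep only languages with work remaining,
    mirroring the filter step done by hand in the spreadsheet."""
    rows = csv.DictReader(io.StringIO(csv_text))
    return [r for r in rows if int(r["chapters_remaining"]) > 0]

remaining = load_remaining(SAMPLE_EXPORT)
print([r["language"] for r in remaining])  # languages still needing work
```

The point is repeatability: once the filter logic lives in code, each new export runs through the same pipeline without redoing the manual analysis.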
Peter Huang 9:17
Yeah, yeah. So again, as a non-AI practitioner, a non-AI expert, it was incredible for me to see just how helpful what Klappy was doing was going to be to this process. Essentially, what I had to do was produce one initial example, a proof of concept, if that's even the right word, of what I wanted to happen in the future every time there's a data update or new records being entered, or whatever it may be. Something that would normally take me personally a good amount of time, collecting, collating, and reapplying everything manually from a number of data sources (and I know there are other ways to do it), the AI is able to do with those flat files near-instantaneously, and then even give us a really nice visual UI dashboard, which is really helpful. So the AI application Klappy helped with was a massive help and a really nice way to look at the data in phase one. At the end of this phase one of initial pull, aggregation, and analysis (this is a long answer to your question of how we identified the at-risk languages), the reality is that we only know what we can know from the data, and we also know that not all of the data is extremely accurate. So we made some calls based on what's in the records we have, knowing that some of it will be wrong, and we intentionally chose to look at the worst-case scenario: what if every record is exactly correct, nothing is going on that isn't recorded, and this is all we know and all we have? As we filtered that down, we've been able to see a couple of categories that are really obvious.
If these records are true, these languages are absolutely at risk of not hitting their goal in 2033. The good news is that this constitutes only a smaller portion of the languages. Then there's a second, middle group of languages where some research, follow-up, and confirmation are needed. If the records are true exactly as stated, those are probably going to be in the at-risk category as well, but we feel there's a good chance that many of those records are simply not updated, or there's more work going on than what we have on file. If that's the case, those languages may not actually be at risk of missing their All Access Goal by 2033. And then there's a whole large group that we would say just needs pace confirmation: we already know they're active, we know there's a partner doing translation work in those languages, and that's thousands of languages; really we just need to do some data confirmation and follow-up. Those are in the lowest risk category. But again, a lot of this is data cleanup: if we don't know whether a record is correct, then we can't truly know the status of that language in the All Access Goals. So that's what we ended up with in phase one. Before going on to phase two, I'll pause; it looks like Klappy maybe has something.
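A toy version of that worst-case bucketing might look like the following; the thresholds, field names, and tier labels are illustrative assumptions, not the lab's actual rules:

```python
YEARS_LEFT = 7  # 2026 through 2033

def risk_tier(chapters_remaining, chapters_per_year, active_partner):
    """Bucket a language while taking every record at face value
    (the worst case: assume each record is exactly correct)."""
    if chapters_remaining == 0:
        return "complete"
    # Guard against a recorded pace of zero chapters per year.
    years_needed = chapters_remaining / max(chapters_per_year, 0.001)
    if years_needed > YEARS_LEFT and not active_partner:
        return "at risk"             # as recorded, cannot finish by 2033
    if years_needed > YEARS_LEFT:
        return "needs confirmation"  # records may simply be out of date
    return "confirm pace"            # active and likely on track; verify data

print(risk_tier(1189, 50, False))  # unstarted full Bible, no partner -> at risk
```

The key design point from the conversation is that a bad-looking record with an active partner lands in "needs confirmation" rather than "at risk," since stale data is the likelier explanation.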
Klappy 13:09
Yeah, to make it tangible and to oversimplify it: we looked at how much time we have left and at the average pace, and there are some obvious things that will likely be at risk of not being completed. Some of the goals are two full Bibles, others are a full Bible, some are a New Testament, and some are Scripture portions. So obviously, with seven years for 25 chapters of Scripture portions, those can probably be done on time even if you start a couple of years from now; those aren't the at-risk languages we're really focused on. What we're more focused on is the big, obvious stuff. If there are two full Bibles that aren't done yet, we really have to consider where those are in their progress: how many chapters are left of those goals? A full Bible is going to be another big one, especially if the full Bible, or even language development, hasn't started, because keep in mind these remaining languages are some of the hardest, meaning some of them don't even have an orthography. If they haven't started language development and have a full Bible remaining, that's a red flag we need to zero in on: where are we at, is the data up to date, that sort of thing Peter was already referencing. And if what remains is a New Testament, it's less at risk; if we had shades of red, it would be a step down, not quite as alarming as the others, because a New Testament is much more easily done in time. But for a lot of those, once again, you have language development that needs to happen first. We'll talk later about what we need to do with these languages.
So, yeah, that's a way to help visualize what the scope of remaining work is, what time we have left, and whether, based on past experience and past pace metrics, a language is on track to complete in the seven years remaining, by 2033.
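The pace arithmetic behind that comparison can be made concrete with a small sketch. The chapter counts reflect the Protestant canon (1,189 chapters in a full Bible, 260 in a New Testament); the `on_track` helper and the sample pace numbers are illustrative assumptions:

```python
# Chapter counts per goal type (Protestant canon: 1,189 chapters total).
GOAL_CHAPTERS = {
    "two full Bibles": 2378,
    "full Bible": 1189,
    "New Testament": 260,
    "25 chapters": 25,
}

def on_track(goal, chapters_done, avg_chapters_per_year, years_left=7):
    """Compare the pace required to finish by 2033 with the observed pace."""
    remaining = GOAL_CHAPTERS[goal] - chapters_done
    required_pace = remaining / years_left
    return avg_chapters_per_year >= required_pace

# A 25-chapter portions goal has plenty of margin even starting from zero...
print(on_track("25 chapters", 0, 10))   # -> True
# ...while an unstarted full Bible at a typical pace is a red flag.
print(on_track("full Bible", 0, 60))    # -> False
```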
Peter Huang 15:22
Yeah, yeah, thanks, Klappy. That's helpful color around some of the things I was saying. So, applying, to the best of our knowledge, a pace metric helps us determine what effort may be needed to get certain languages back on pace, and then comes the follow-up research; that's the next stage. Right now we're moving into phase two, where a task force will be put together, made up of the Innovation Lab, ProgressBible, Rev 7:9, and a couple of the most major data contributors in the ETEN space. That's what will constitute the task force that's going to help us with this follow-up research and data confirmation.
Isabella Scarinzi 16:22
So we've started touching on this a little, but I'd love to dig deeper into the results you've been seeing from this phase-one AI data analysis, based on the data that was already available to us. What are some of the trends you see coming from this data? Can you quantify it for us? Can you point to the regions that are most at risk? And is there a pattern where innovation or new approaches might be required?
Peter Huang 16:54
Yeah, yeah. So I'll talk about some things I've seen, and I'll let Klappy talk about a few things as well that I'm sure he's seen. We've learned a lot from the AI's ability to analyze this raw data and put it together in ways that allow the data to be easily digested and visualized in different formats, different ways of cutting it, which has been really helpful. And it pretty much confirmed some of our initial suspicions. By the way, every time we upload a flat file, this is now hitting hundreds of thousands of individual records of different record types, so it's not an easy amount of data for a person to pull in and make sense of themselves. The AI is really able to confirm things and make them digestible in a way people can understand. It was able to confirm these tiers of risk: the most obvious at-risk languages on the list, a highly-likely-to-be-at-risk tier, a medium needs-confirmation tier, and then a lower tier where we're not going to rule things out yet; they're probably not at risk, but they need confirmation just to make sure. And when we say something is at risk, we basically mean that the pace metrics we're applying do not indicate it will finish by 2033, based on all the records we have. So the AI was able to cleanly and clearly splice the data out among those risk tiers and then display it in all kinds of ways.

One of my favorite things to look at is the regional view on the dashboard we use, which is just internal right now; Klappy developed this internal dashboard, and it's awesome. It has an AI assistant on it that you can just ask questions about the data, and it will go through and pull things out for you. It'll answer basically any question you have on, again, six figures plus of records, and it can visualize the data any way you ask: put it on a map, show a heat map, a density map, graphs of this stuff. That AI has really been able to show density and heat mapping, and we've seen some regional trends, some obvious ones we've already known: Southeast Asia, East Asia, West Africa, places known to be difficult. But I think what's been cool is that it's not just showing a heat map of a region where we already know things are challenging, or records may not be correct or totally up to date. It's taking records from multiple sources and helping us see: okay, one record says there's nothing here, another record says there's something here, another record indicates something happened here in the past but maybe isn't happening now. And we're able to really quickly identify leads to follow up on in certain areas. So East Asia and Southeast Asia are already areas where we've started to make movement and progress, just based on some of the insights the data is giving us regionally about some of those hard-to-reach areas.
Klappy 20:48
Yeah, there's so much there. It's hard to imagine, now that I look back at all the data sources we pulled together. It reminds me of when I got started on this. I told you and Dow: honestly, I have no idea how this is going to work. I have no vision in my head yet of how we're actually going to create a dashboard for this, but I'm pretty confident I can ask an AI to interview me, and I'll work through the meeting notes we've already had to answer some of the questions; I'll even upload some of the transcripts from the meetings we had. And Peter and I did an interview: I interviewed him, and we talked for about an hour. I took those meeting notes from him walking me through all the work he did on the phase-one analysis, uploaded those transcripts along with my own interviews to the AI, and asked it: what can we build? How do we replicate what Peter's done? And I was amazed at how quickly and cleanly it was able to help us produce a usable dashboard. Not only that, but I expected it to take minutes or longer to churn through all the data. Instead, we just have raw data sources that it parses in memory, at runtime, in your web browser, on your phone, near-instantaneously. Those hundreds of thousands of records Peter was talking about, in total, get parsed within about two seconds on page load in your browser. And on top of that, it has already coalesced and correlated all the data together, so that when you render a table, it renders instantaneously, and you can actually see multiple data sources on one single table. These data sources are not only the internal ones shared between our partners; as we put our heads together at the lab, we started talking about other indicators of risk, like one the UN publishes yearly or semi-yearly.
They call it the Multidimensional Poverty Index. With that number, we can start looking at other risk factors we might consider. There are other, similar data sources we're pulling in and injecting into our data, and it's been helpful to look through different lenses that may not be exactly what we're looking for, but at least hint at things that might slow down the pace.
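A sketch of how an external indicator like the MPI might be layered onto the translation records (every field name, country code, value, and the 0.3 threshold here is made up for illustration):

```python
# Hypothetical translation records joined with an external indicator such as
# the UN Multidimensional Poverty Index (MPI); all values are invented.
languages = [
    {"code": "aaa", "country": "X", "at_risk": True},
    {"code": "bbb", "country": "Y", "at_risk": False},
]
mpi_by_country = {"X": 0.41, "Y": 0.05}  # higher = more deprivation

for lang in languages:
    lang["mpi"] = mpi_by_country.get(lang["country"])
    # The indicator doesn't decide risk by itself; it only hints at
    # conditions that might slow the pace and deserve a closer look.
    lang["extra_scrutiny"] = lang["at_risk"] and lang["mpi"] > 0.3

flagged = [l["code"] for l in languages if l["extra_scrutiny"]]
print(flagged)  # -> ['aaa']
```

The design choice mirrors what's said above: the external lens never overrides the translation data; it only prioritizes which at-risk records get a closer look.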
Peter Huang 23:38
Yeah, yeah. Going off the same stuff here: the idea that you can pull in, near-instantaneously, so many records from different sources, and in a bespoke way tell it to create a view, or give me numbers, or do X, Y, and Z with the data. You can take existing products, previous work records, current work records. The Multidimensional Poverty Index has a dozen sub-metrics that are actually really helpful for assessing a region and a number of potential concerns with starting projects or work there. It can take all of these things: LWCs in the area, anything from geographical data, language data, work record data, product data, human data. The possibilities are sort of whatever your imagination allows for whatever you're trying to figure out. What Klappy is describing has felt, for me, a non-technical, non-AI person, like having a research assistant that can finish things in three seconds for you, without the human interaction you'd otherwise have to manage; a research assistant in your back pocket that can instantaneously give you answers. It's been really, really helpful. It's a cool system. For now, we have to keep it internal because of the data use agreements and some legal compliance concerns. We're trying to figure out a way that we'll be able to be a bit more open or public and share some of the things we have in there, but for now it's really just us, plus the small task force that will be working on data follow-up, record confirmation, and research.
But it's data that we have from all the sources we would all be familiar with within ETEN. I guess the point of this is just to say how helpful the AI has been in analyzing the data.
Isabella Scarinzi 26:21
So, to follow up on that: we have the data, and we know what's at risk from this initial phase-one analysis. Have we been able to identify what innovation needs to happen, and where, within these at-risk regions? What are some of the innovations we've identified that need to happen?
Peter Huang 26:44
So I think that part of the initial analysis can give us a lot of information, regionally and from historical records, whether for a language, a language community, a cluster, or even just a region where a language is spoken. That's one factor that helps inform the question. What phase two's follow-up research and deeper digging will really help inform is where innovation variables can be injected in a way that's not overly disruptive. Categorically, we're going to look at which languages, projects, areas, communities, and partners are good candidates for things like injecting some multi-modality into the work (or maybe there's no work yet and it's a good candidate for that); which are good candidates for some AI application, whether that's drafting or checking or any of the other ways AI can be applied; and which are good candidates for flexible, scalable quality assurance methods and some of the CBBT (church-based Bible translation) principles the lab has been working on. Those aren't exclusive; some areas may be good candidates for two or all three. That's the work to come. One of the real challenges with Bible translation, as most of us who work around it know, is that it's difficult to apply a blanket understanding of what will and won't work in different contexts; sometimes it can feel like everything is unique and has to be assessed in its own context, individually. We're hoping we can avoid having to assess every single language individually for where innovation intervention can take place, but that's going to be a major part of this second research phase.
Klappy 29:21
Yeah, I think that's important, because we can't look categorically at all things and assume the solutions are the same, right? There's no silver bullet that's going to help us address our needs to finish on time. At the same time, we can't swing the pendulum to the other extreme and expect every region or every particular need to require a completely new way of being addressed. What we've been doing is observing patterns over the past five years, since the lab started. We have specific things we've been focusing on and patterns we've been observing as we work with our partners to see what works and what doesn't. That goes back to the same things Peter just said: multimodality, OBT-first, using AI when appropriate, whether in the understanding process, where it can help speed things up, or at times in AI drafting. And church-based Bible translation is a huge accelerant, because it gives the translation teams on the ground the ownership and the motivation to own the process themselves. Those are the biggest patterns we've observed, and it's been exciting to see those recommendations emerge, because they're born out of observing our partners doing these things. These aren't just things the lab made up, saying "if you could do these things, it'll be faster"; we've observed them actually making things faster, so we know they will. What we want to do is help our partners test these in their contexts, and we're confident they'll see the same kinds of acceleration that other places around the world have seen.
Peter Huang 31:24
And one last thing I would add, going a step further from your question about trends and patterns and things we're learning: those things are necessary, but they're only as good as the action taken on the information received when you learn or see patterns or trends. There's a lot of complexity to analyzing translation, language, and regional data in view of the All Access Goals. One of the things Klappy and I have been thinking about and discussing recently is that 2041 is the current projection based on an average of everything under the All Access Goal umbrella. But the truth is that if 99.9% of the languages in the All Access Goals were on pace for 2033 and one single language was on pace for 2070, that one would drag the average way out, and it might be just one language that needs some sort of innovation intervention. We know it's not one language; we know it's a number of them, a decent handful, because we have data that informs that.
But the ones at the very highest risk, whose current trajectories are way far out, are the ones we're probably looking at most closely for trends and for opportunities to intervene with innovative methodologies and technologies. From what we've learned and what we've been looking at, we've already been making movement over the last few weeks, especially with a regional focus around Southeast Asia and East Asia, and then West Africa will be the third regional focus, I think, because that's where the majority of not just the remaining All Access Goals, but the remaining goals with the furthest finish trajectories, exist right now, at least as far as we're seeing, and probably with the least active capacity pointed at them when it comes to translation work. So yeah, there's a lot of deep analysis that can be done, and different ways of looking at things, but we want to make sure that the things we're learning get turned into really targeted action at specific areas and languages.
Klappy 34:20
You mentioned those languages that have a target date far out. I think that was the biggest eye-opener, a real revelation I had when pulling up the data through this dashboard: being able to quickly and easily see that there are dozens of languages on the All Access Goal list whose target date is after 2033. There are different ways you can look at that. Those individual languages are still part of the All Access Goals, and we'll definitely want to work with them to better understand why that's their target date. But the AI assistance here has also helped us see something by looking at the pace metrics, at how they're progressing over time: some of those languages are actually on track to be finished. Even though they have a target date after 2033, their pace is on target. And the opposite is also true, right? We have languages with a target date before 2033 whose pace metrics project past 2033. Those are the types of things we mean when we talk about future phases and bringing new lenses to the data: you have to look at the same data through multiple lenses to understand that this is a red flag over here, while over there everything looks good. That's where we'll have to better understand the individual cases to get clarity on whether we're on track and whether something is truly at risk.
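The two lenses described here, the stated target date versus the projection implied by observed pace, can be sketched like this; the function and field names are mine, not the dashboard's:

```python
def lenses(target_year, years_needed_at_current_pace, now=2026, deadline=2033):
    """Two views of the same language: the partner's stated target date,
    and the finish year implied by observed pace. The flags can disagree."""
    date_flag = target_year > deadline
    pace_flag = now + years_needed_at_current_pace > deadline
    return {"late target date": date_flag, "late pace projection": pace_flag}

# Target after 2033, but the observed pace finishes in time:
print(lenses(target_year=2035, years_needed_at_current_pace=5))
# Target before 2033, but the observed pace projects past the deadline:
print(lenses(target_year=2030, years_needed_at_current_pace=10))
```

When the two flags disagree, in either direction, that's exactly the case the follow-up research has to resolve language by language.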
Isabella Scarinzi 36:14
So we've talked about AI being used to help us with software development and with analyzing the data. Are there any other ways AI will be used in this project?
Klappy 36:28
Most definitely. One of the things an AI suggested to us was to look at those pace metrics, to actually calculate them. In my mind I was thinking, man, how long is this going to take to go through and analyze? The first time it did that for us, it took about 20 minutes to churn through all the data and calculate all the pace metrics. I basically told the AI, that's taking too long; how can we make this happen in a browser, where the user can click a button and it calculates the pace metrics? So it figured out a way to optimize the code, pre-filter, and get things down to where it could do it in about 60 seconds. Well, that was still timing out in the browser. So I kept pushing the AI coding agents to optimize the code even further. This was really, for me, a test: I know there are limits to what can be done, but can the AI push the bounds of what I deem logical, and can it optimize code better than I could, really quickly? By the end of an afternoon of going back and forth with the AI, I couldn't believe it: we were down to less than two seconds to analyze that same data and calculate the pace metrics for all languages. It's near-instantaneous, and it's happening in your browser. When we pull up the dashboard and click the button that used to take 60 seconds, somehow, by using AI to optimize its own code again and again, it does it without pre-rendering the data; this is all from raw data dumps. It can do that in your browser, on your phone, in less than two seconds: calculate all the pace metrics. So one of the things AI is doing for us is helping us not only identify new ways to analyze the data, but also optimize our ability to run these kinds of reports. And for the kinds of things Peter was talking about, you can just have an idea.
Go ask the AI to analyze the data in a new way, and it'll generate a table for you, and it'll generate the chart so you can visualize that data. Those are ad hoc queries; they may take, you know, 20, 30, 40 seconds to get a result from. But once we identify the things that we and our partners find helpful, then we'll be able to work with another AI to help us optimize it to where it doesn't take 30 or 40 seconds. It'll be able to do it in less than one or two. Wow.
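[Editor's note: the transcript doesn't include the lab's actual code or data schema, but the pace calculation Klappy describes — pre-filter the raw records, then compare each language's actual translation pace against the pace required to finish by 2033 — could be sketched roughly like this. All field names, the sample data, and the exact formula here are illustrative assumptions.]

```python
# Hypothetical sketch of the pace-metric calculation described above.
# Field names (chapters_total, chapters_done, start_year) and the
# formula are assumptions for illustration, not the lab's real schema.

GOAL_YEAR = 2033

def pace_metrics(projects, current_year=2026):
    """Return the languages whose current pace won't finish by GOAL_YEAR."""
    at_risk = []
    for p in projects:
        # Pre-filter: skip records without usable progress data,
        # which is the kind of trimming that speeds up the analysis.
        if p.get("chapters_total", 0) <= 0 or "start_year" not in p:
            continue
        years_elapsed = max(current_year - p["start_year"], 1)
        actual_pace = p.get("chapters_done", 0) / years_elapsed
        remaining = p["chapters_total"] - p.get("chapters_done", 0)
        years_left = max(GOAL_YEAR - current_year, 1)
        required_pace = remaining / years_left
        if actual_pace < required_pace:
            at_risk.append(p["language"])
    return at_risk

# Illustrative sample: language "A" is behind pace, "B" is ahead.
sample = [
    {"language": "A", "chapters_total": 1189, "chapters_done": 100, "start_year": 2021},
    {"language": "B", "chapters_total": 260, "chapters_done": 200, "start_year": 2020},
]
print(pace_metrics(sample))  # -> ['A']
```

A linear pass like this over pre-filtered records is cheap enough to run client-side, which is consistent with the in-browser, under-two-seconds behavior described in the episode.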
Isabella Scarinzi 39:23
Yeah, that's incredible. So as we're nearing the end of this conversation, I've heard you guys talking about the next steps moving forward. Is there anything our listeners, maybe translation teams, church networks, partners, might be able to contribute to help us at this stage?
Klappy 39:45
Yeah, the short answer is: help us know where you're at on your Bible translation, because what we're doing is no good if the data isn't up to date or accurate. I mean, yeah, there's still value in it, but it's really challenging to know what we're looking at if the data isn't accurate.
Peter Huang 40:07
Yeah, yeah. We not only have to have simple data around translation work that's happening, or even pre-translation work that is happening or has happened; we also have to know some details and some context around these things to make a well-informed action plan that's hyper-focused on these at-risk languages, because every situation, every language community, every context is unique in some ways. So yeah, I would say: be open with your data, and be willing to share, as confidentially or as freely and openly as you're comfortable with, information about the translation work in your portfolios of projects. That would be really helpful to us. The other thing I would say would be an openness and a readiness to work with the task team that's going forward in phase two. There's a good chance that somebody at your agency or your affiliate translation partners will hear from people on this task force to confirm certain data or certain records that we might have. So yeah, any willingness to help in that effort matters, because the All Access goals are not on pace to be hit by 2033 right now, and there needs to be an acceleration effort and an innovation intervention in certain areas, one that doesn't step on anybody's toes, but that's really hyper-focused on those highly at-risk languages. A willingness to be open and to help is huge, especially in the more sensitive areas, because I know that's always a big concern, and because it's a concern for you, it's also a concern for us. It's one of the most difficult areas to get accurate information in.
I think it's a big waste of time to spend weeks or months going back and forth on who can know what, trying to wrangle data, as opposed to just getting the work going and actually creating and translating scripture, creating scripture access, for these communities.
Klappy 43:01
Thank you. Yeah, that's really good, Peter. It's good to end on that part, because really, none of this data is just data. What really matters is people having God's Word, right? That's why we do this, and so actually taking action items and implementing things to help people get God's Word sooner, that's the heart of it. For this generation, for people to have God's Word by 2033, is really the heart of everything we're doing here.
Peter Huang 43:39
Yeah, yeah. It's not about control, or having the most projects, or whether money is going to go here or there, or what's going to happen. The end goal, the shared vision of E 10, the reason we're all a part of this, is to see God's Word get into the hands of people who don't have access. I know there are things that are important to consider, and there are sensitivities that are important to be thoughtful about, but in our past experience, and what I'm praying against in the future, is wasting a lot of time just going back and forth trying to get accurate information, as opposed to having that accurate information amongst trusted partners, making a plan, and actually moving forward toward scripture access for these communities.
Isabella Scarinzi 44:40
All right, so we're talking about being open and willing to share data. If someone wants to do that, what would be the next step they could take? Do they come to us? Is there anyone else they would reach out to to share their data?
Peter Huang 44:53
Yeah, most of the E 10 agencies already have an established pipeline for sharing their data and their information with Progress Bible and/or rev79. You know, they work together. Again, we're not competing with any other system out there, and we're not creating a separate data warehouse anywhere; we're not trying to duplicate the efforts that are already happening. So we would ask that you be more open with sharing your project data, specifically with Progress Bible and rev79. And it wouldn't only benefit the efforts of our team: the more open you can be, and the more consistent you can be with current data, the more it benefits the entire alliance of E 10 in their efforts. So we would ask that, yeah, you share it with your E 10 partners, with rev79 and Progress Bible. Awesome.
Isabella Scarinzi 45:55
Yeah, thanks for clarifying. So this has been a great conversation, Peter. We appreciate you coming on and talking to us about the work that has been happening. If anyone has any further questions, they can reach out to us at lab at E 10 dot Bible, and make sure you keep an eye on the things that E 10 and the Innovation Lab are going to be putting out, because there will be more information about this for sure, and more available data will come out. So don't forget to subscribe to this show, and we will see you at our next episode.
Peter Huang 46:24
Thanks, Isabella. Thanks, Klappy.
Theophany Media 46:34
The Bible Translation Innovation Podcast is brought to you by the E 10 Innovation Lab. This episode is edited and produced by Jake Dobrins with Theophany Media. Your hosts were Joel Matthew and Christopher Klapp, with facilitation by Isabella Scarinzi. Please subscribe on your favorite podcast platform, and we'll be with you again next month.