NSW Department of Customer Service is looking to the multilingual capabilities of generative AI to make public information and services more accessible and inclusive.
Speaking at a Salesforce Agentforce event in Sydney, director of digital strategy, investment and architecture Daniel Roelink highlighted "exciting" opportunities with GenAI around accessibility and inclusion.
Roelink said it's still early days, noting that large language models (LLMs) are yet to cater to the full range of languages spoken by citizens of NSW.
"[The models] need some form of adjustment for the different dialects and different nuances within language," he said.
Roelink said governments broadly are addressing this challenge in innovative ways, such as by crowdsourcing assistance from local communities "to help retrain the models to the dialect."
If the models' fluency improves, Roelink said, there is a "huge opportunity" for them to aid in the delivery of accessible and inclusive services.
Roelink said the natural language and multilingual capabilities of generative AI models set them apart from previous iterations of AI.
"Generative AI has brought forward a new way in which we interact with systems and information," he said.
Early GenAI work at the department followed a familiar pattern of trying to understand the technology and how it could be used in government contexts.
"We went through a huge amount of excitement and a lot of people just want to use the technology for the sake of using technology, but we know from experience that's not a great way to go about things," Roelink said.
"We're at a point now where we're trying to educate across government, what are the specific differences with this technology to help [staff] understand the problem that's suitable to be solved with the technology, but also when it's not suitable to be used."
He explained there is still a need for education on the differences between generative and other forms of AI.
The department's focus is now also on scaling up use cases responsibly.
Roelink said the government has adopted a "principles-based approach" around ethical and responsible use.
"These systems are trained on data, so the question is, is the use of that technology ethical?" he said.
"Through questions around transparency, trust, fairness, and accountability … you can start to determine whether or not the use of that system is appropriate for the use case."
While various jurisdictions debate the merits of legislating responsible use of the technology, Roelink noted that responsibility ultimately rests with the department or agency that chooses to deploy it.