A Government Librarian at the 2024 UKSG Conference: Part 3, by Naeem Yar
Naeem Yar is a Librarian with the Welsh Government and serves as GIG’s Events Co-ordinator. This is the last in a three-part series of reflections on the 2024 UKSG Conference in Glasgow, which are written in a personal capacity.
One of the most popular aspects of Glasgow Science Centre as the gala venue was that attendees could interact with the exhibits, which was a particular positive for those who weren't part of a group (Simon Williams)
Having
decided to have an early night, I missed out on Tuesday evening’s UKSG gala. From
chatting with delegates who attended, it sounded like a good time was had by
all, with the unique choice of location in the Glasgow Science Centre being
particularly praised. Arriving in the Lomond Auditorium on Wednesday morning,
the mood was a little subdued. I am not sure how many were simply showing the
effects of being stimulated by so many ideas over the last couple of days and
how many were recovering after letting their hair down the night before –
though apparently there were plenty of repeat attendees embracing the gala
disco with gusto, so I am sure at least some will have been quietly nursing a
hangover.
Despite
that, we were treated to another fascinating plenary session, this time focused
on attempts to apply the convenience of generative AI to searching for academic
literature. This was another of the sessions which I was particularly looking
forward to. As we have acknowledged in delivering advanced internet search
training, one of the reasons that ChatGPT and other generative AI chatbots have
caught people’s imaginations is clearly the user friendliness of the inputs
required and outputs generated. Being able to input a question in natural
language and receive a similarly human-like response is clearly closer to the
sort of reaction people are used to than, for example, inputting keywords into
a search engine and then browsing through links in the results. However, the
shortcomings of such chatbots as information-seeking tools are well known, including hallucination, training on biased data and, when integrated with search engines, the same lack of quality control over information found online for which Google and similar search tools have been criticised. Combining the ease of use of a chatbot with the quality control
of a bibliographic database does seem to be a holy grail for providers of
search resources. Here we were fortunate to hear from two people involved in
experimenting with integrating generative AI into search tools, namely
Christine Stohn of Clarivate and David Pride of the
Open University, who is involved with CORE (slides
available via Slideshare, and a paper co-authored by David on the same
topic is available from the OU's
repository).
Christine Stohn outlines the generative AI project she undertook at Clarivate (Simon Williams)
One thing both projects had in common was ensuring that the chatbots generated their outputs from the academic publications indexed in their respective search tools, rather than from what the chatbots had “learned” from their training data, in order to ensure accuracy. I had seen examples of AI-enhanced search products where this appeared not to be the case: the chatbot generated an answer and then either cited a source that said something different from what the chatbot had claimed, or struggled to find a citation that supported what it said. Knowing that a
chatbot was only basing its answers on peer reviewed evidence would obviously
be reassuring for anyone looking for high quality information or supporting
others to do this. David reported that CORE-GPT, which unsurprisingly
integrates CORE with ChatGPT, produced answers where you could clearly see
which sources had been “copied and pasted” into the synthesised answer. This helps to address my favourite analogy for using chatbots to find information: it is like fast food, in that you do not necessarily know where the ingredients in the product you are consuming come from. CORE-GPT also admits
when it cannot find an answer rather than hallucinating one, which is obviously
a big step forward for chatbots from the information professional’s
perspective.
Both
case studies were illuminating on how such tools might be structured in order
to deliver desired outcomes. Christine mentioned that Clarivate’s project with
AI21 Labs indexed the academic content to be searched by paragraph, allowing
for quite a high level of granularity, with the search algorithm identifying the paragraphs in the database most relevant to the query and extracting content from them for synthesis into the answer. As I had found from
experimenting with using chatbots for searching, and as reported in the AI
literacy workshop, generative AI tools can lack precision in their outputs,
with responses sometimes not addressing all the concepts contained in a query,
so perhaps this kind of more elemental breakdown of content would remedy this.
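To make the pattern described in these two talks a little more concrete, here is a minimal sketch of paragraph-level retrieval feeding a grounded, citation-aware prompt that tells the model to abstain rather than hallucinate. The corpus, the TF-IDF ranking step and the prompt wording are illustrative assumptions of my own, not the actual Clarivate or CORE-GPT implementations.

```python
# Illustrative sketch only: paragraph-level indexing, retrieval of the most relevant
# paragraphs, and a prompt that grounds the answer in those paragraphs alone.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Paragraph-level index: each entry is one paragraph plus the paper it came from.
paragraphs = [
    {"paper": "Smith 2021", "text": "Open access increases citation rates in physics..."},
    {"paper": "Jones 2022", "text": "Repository deposit mandates vary widely across funders..."},
    # ... many more paragraphs from indexed publications
]

def retrieve(query: str, k: int = 3) -> list[dict]:
    """Return the k paragraphs most similar to the query (TF-IDF as a stand-in ranker)."""
    texts = [p["text"] for p in paragraphs]
    vec = TfidfVectorizer().fit(texts + [query])
    scores = cosine_similarity(vec.transform([query]), vec.transform(texts))[0]
    top = sorted(range(len(paragraphs)), key=lambda i: scores[i], reverse=True)[:k]
    return [paragraphs[i] for i in top]

def build_prompt(query: str, sources: list[dict]) -> str:
    """Build a prompt that grounds the answer in the retrieved paragraphs only."""
    numbered = "\n".join(f"[{i + 1}] ({s['paper']}) {s['text']}" for i, s in enumerate(sources))
    return (
        "Answer the question using ONLY the numbered sources below, citing them as [n].\n"
        "If the sources do not contain the answer, reply: 'I cannot find an answer.'\n\n"
        f"Sources:\n{numbered}\n\nQuestion: {query}\nAnswer:"
    )

question = "Do open access papers get cited more?"
prompt = build_prompt(question, retrieve(question))
# `prompt` would then be sent to the LLM; citations in its answer can be traced back
# to the indexed paragraphs, and it is instructed to abstain rather than hallucinate.
```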
Christine’s
presentation also contained enlightening insights on the practicalities of
working with large language models that can be useful to those of us who are
end users of such products. She noted that whilst setting up an LLM to heavily prioritise accuracy is important for information-seeking purposes, it can lead to the model falling into repetition, though this can be fixed with repetition filters or by setting the level of creativity slightly higher. With Copilot web search allowing some control over the levels of creativity and precision, this is useful knowledge for undertaking everyday tasks with such tools, and it can easily be taught to others.
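As a rough illustration of that trade-off, the sketch below uses the OpenAI Python SDK purely as an example interface (not the setup Christine described): a low "creativity" setting keeps answers precise but can invite repetitive loops, which a repetition penalty or a slightly higher temperature can mitigate. The model name and parameter values are assumptions for the example.

```python
# Illustrative only: low temperature favours precise, deterministic answers;
# a frequency penalty discourages the repetitive loops that can come with it.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model
    messages=[{"role": "user", "content": "Summarise the evidence on open access and citations."}],
    temperature=0.2,       # low "creativity" to prioritise accuracy...
    frequency_penalty=0.5, # ...with a repetition penalty to avoid loops
)
print(response.choices[0].message.content)
```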
She also gave food for thought on testing and evaluating generative AI, stressing
the importance of formulating relevant use cases – clear ideas about what
people are likely to want to do with a tool – and defining success criteria.
Having dipped my toe in this area alongside colleagues, I can see that whilst
we did well in terms of thinking about use cases, our approach to evaluating
success has perhaps been more instinctive, spontaneous and impressionistic. The
emphasis on explicitly defining what is required for an outcome to be viewed as
successful, fit for purpose or useful is definitely prompting me to reflect on
my experience of experimenting with AI tools, considering the strengths and
weaknesses of their outputs, identifying what we in our library need from such
products (and how this might be distinctive compared to colleagues working
elsewhere in our organisation), and encouraging others to do the same. This can
hopefully help us build expertise and confidence in evaluating such products in
the future and ensure that they meet our needs.
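For what it is worth, here is a minimal sketch of what "use cases plus explicit success criteria" might look like in practice. The example queries, the criteria and the check_answer() logic are hypothetical illustrations of mine, not a prescribed evaluation framework.

```python
# Each use case pairs a realistic query with explicit, checkable success criteria.
use_cases = [
    {
        "query": "What does the evidence say about remote working and productivity?",
        "must_mention": ["productivity", "remote"],  # concepts the answer must cover
        "must_cite": True,                           # answer must reference a source
    },
    {
        "query": "What is the population of Atlantis?",  # deliberately unanswerable
        "expect_abstention": True,                       # tool should admit it cannot answer
    },
]

def check_answer(case: dict, answer: str, citations: list[str]) -> bool:
    """Return True if the answer meets the success criteria defined for this use case."""
    if case.get("expect_abstention"):
        return "cannot find" in answer.lower()
    if case.get("must_cite") and not citations:
        return False
    return all(term.lower() in answer.lower() for term in case.get("must_mention", []))

# In practice each query would be run through the AI tool and its outputs recorded,
# so that "success" is judged against stated criteria rather than first impressions.
```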
David Pride breaks down the nuts and bolts of how CORE-GPT works (Simon Williams)
Meanwhile,
David provided some really useful findings in terms of the performance of
CORE-GPT by academic field. Results indicated that it did best on topics
related to political science and physical and applied sciences and was weaker
on humanities and biology. On the face of it, success seems to vary subject by subject, and it is not clear that there is a broader trend in the types of subject it works best for. One encouraging finding is that even in subjects where performance was weaker, answers retained a high level of trustworthiness, as CORE-GPT was honest about
aspects of the question it was unable to find answers to. Such developments
seem to be a significant step forward in using this technology to find
information.
I
then headed upstairs for a breakout session led by Katrine Sundsbø of DOAJ.
It allowed participants to play “Open, Global, Trusted: The DOAJ game”, which
used a series of puzzles (literally a jigsaw in the case of the first one) to
teach us about the history of DOAJ, the values it was built on and the
development of its practices and tools. We only had a small group of around
half a dozen people partly because of a change to the location for the session
on the day, and perhaps also because some people were the worse for wear and didn’t fancy something so interactive. It turned out to be an ideal situation, as the play elements required all participants, who took on the roles of different academic publishing stakeholders, to work together, and having everyone sat at the same table helped with that.
The
specific audience the game was targeted at was not that relevant to me in my
workplace – the Welsh Government has researchers, but their work is primarily
published as grey literature on our website. I am curious, however, about
knowledge management, and it struck me that this was not only an example of
gamification in education, but also storytelling, a popular and effective KM
technique. I could see how the game could explain the factors that were embedded in the founding and development of DOAJ, and hopefully convey the relevance of its project, approaches and tools to researchers in a more engaging way than a simple lecture would. I mentioned this to Katrine at the end of
the session and she acknowledged that this was an idea ingrained in the
development of it, indicating that it had evolved from a game into what she
described as a “gamified workshop”. She mentioned she had found that sometimes
people playing educational games focused so intently on the gameplay that they
did not pay that much attention to the messages the game intended to convey, so
the storytelling approach would hopefully help reinforce these.
The pendulum swung once more to collections development for my final breakout session of the conference: “Author identity metadata: Why a Small Publisher Can Address a Major Challenge”. It focused on work being undertaken at Lived Places Publishing, a small academic publisher that aims to provide a platform to authors from under-represented backgrounds, to create metadata that facilitates the discovery of their work (slides available via Slideshare, and a video of a version of the talk delivered as a webinar last year is also available online). I had not come across them before, so aside from
anything else, it was good to find out about a publisher whose output may
potentially diversify our collection. It was also good to hear about work being
done to make the perspectives of those from groups with histories of
discrimination more discoverable.
David Parker, founder of Lived Places Publishing, introduces their work on enriching author metadata. Co-presenters Tash Edmonds and Kadian Pow (left to right) look on from the desk (Simon Williams)
This
is particularly relevant to the Welsh Government in the context of our Anti-racist Wales Action Plan, which makes the lived experience of people from ethnic
minorities central to its approach to tackling racism. The plan also
acknowledges the importance of allowing people to express their identity in
whatever way they choose. It parallels the approach taken by Lived Places, who
believe that the enrichment of identity metadata needs to be author-led in
order to be successful and to ensure that labels are created with the consent
of those they describe. This shouldn’t be taken for granted in societies with
histories of racism, sexism and LGBT+ discrimination, as well as pressures for
the commercial exploitation of data.
Hearing
from Tash
Edmonds of ProQuest, who spoke as part of the panel and has engaged with publishing stakeholders on the topic of metadata, was particularly enlightening.
She mentioned that one of the themes that recurred in discussions was the
evolution of language and the need to modernise outdated terminology. This is
something I recognise from some bibliographic databases, which I have seen
using questionable subject headings that really need to be reviewed. It is also
particularly relevant to me as I am responsible for thesauri in our library
management system and one thing I have been meaning to do for a while is to
consult with our staff diversity networks around the terms used in our subject
headings. So, this was a timely reminder of the value of an inclusive approach
to our work that engages all sections of our user base.
We
then filed back into the auditorium for some inspirational words to send us on
our way back home. In “Revolutionary Leader: How to lead authentically in a
world that’s set up for you to follow”, standup comedian and life coach Shereen Thor drew on the lessons of her own
life to argue that the authentically-lived life is the good life: that we need
to live lives that are true to who we are as people, despite pressure from
others to fit in the moulds that they have created for us. She also suggested
that in doing this, we act as leaders for ourselves, which is a prerequisite
for being able to lead others.
Shereen Thor outlines the importance of being true to yourself (Simon Williams)
I
wholeheartedly agree with her perspective and could understand when she talked
about chafing against the career expectations of her Egyptian-American
immigrant family. Whilst my family have always been supportive of my choices,
there were times when I was younger I did not take as much responsibility for
my own decisions as I should have and I followed well-meaning advice that just
did not suit me. A feeling of being a square peg in a round hole was what led
me to librarianship in the first place, as I quit a job which was impossible (as my then managers found out when they struggled to secure a long-term replacement)
and looked for a career that suited my strengths. It was not quite the same
degree of change as going from the office 9-to-5 to being a stand-up as Shereen
did, but people did tell me at the time I left my job that I was brave. I did
not see it that way, because I knew my work was having a negative impact on my
mental health and that things were not going to improve unless I left. It was a
difficult period, but I am glad it prompted me to reassess everything and find
a career and an environment which suit me and give me a sense of purpose.
So,
as I made my way back down south from Glasgow, what did I take away from the
conference? It introduced me to things I had not heard of, shed new light on
things I had already come across and reaffirmed the value of the critical
skills we librarians practise and advocate for. It reinforced the importance
of understanding the patrons we serve and the contexts they find themselves in
as well as consulting with them, and highlighted the importance of using new
technologies as tools to increase efficiency rather than supplanting our own
professional skills and judgement. Most of all, it was a great opportunity to
learn from the experience, work, and knowledge of others, and to pass that on
to colleagues. More generally, it was nice to feel part of a wider community of
professionals. It can feel isolating to work in a relatively small service in a
sector that does not have the numbers that can be found in academic and public
libraries, though I and some of my colleagues are keen to get involved in
opportunities to share knowledge with others. Major events like the UKSG
Conference make you feel part of something bigger and draw connections between what you do and the professional challenges you face, and what others elsewhere are working on, which, I think, has a psychological benefit as well as a practical one. All in all, not a bad few days spent.
Thanks
to UKSG and the sponsors for providing the places for me and my fellow award
winners. Congratulations to the UKSG team for pulling off the event so
successfully. The many positive comments I heard about the helpfulness of the UKSG staff at the registration desk, and the number of return attendees I spoke to, are testament to the hard work and organisation you have put in over the
years. Finally, special thanks are due to Bev Acreman and Elaine Koster – your
friendly words really helped to put me at ease and make me feel at home. Roll
on next year!