Meaning, AI, and procurement: some thoughts

James McKinney and Volodymyr Tarnay of the Open Contracting Partnership have published ‘A gentle introduction to applying AI in procurement’. It is a very accessible and useful primer on some of the most salient issues to be considered when exploring the potential of using AI to extract insights from procurement big data.

The OCP introduction to AI in procurement offers helpful pointers in relation to task identification, methodology, input, and model selection. I would add that an initial exploration of the possibility of deploying AI also (and perhaps first and foremost) requires careful consideration of the level of precision and the type (and size) of errors that can be tolerated in the specific task, and of how to test and measure it.
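By way of illustration only, a minimal sketch of what ‘testing and measuring’ tolerable error might look like in practice: comparing a tool’s outputs against a small human-labelled sample and computing precision and recall with scikit-learn. The labels and flags below are invented placeholders, not data from any real deployment.

```python
# Hypothetical sketch: measuring error against a human-labelled sample.
from sklearn.metrics import precision_score, recall_score

human_labels = [1, 0, 1, 1, 0, 0, 1, 0]  # 1 = notice truly meets the criterion
model_flags  = [1, 0, 1, 0, 0, 1, 1, 0]  # what the AI tool flagged

# Precision: of what the tool flagged, how much was correct?
# Recall: of what was truly relevant, how much did the tool find?
print("precision:", precision_score(human_labels, model_flags))
print("recall:", recall_score(human_labels, model_flags))
```

Whether, say, 75% precision is tolerable depends entirely on the task: screening candidates for human review may be acceptable; automated decision-making likely is not.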

One of the crucial and perhaps more obscure issues covered by the introduction is how AI seeks to capture ‘meaning’ in order to extract insights from big data. This is also a controversial issue that keeps coming up in procurement data analysis contexts, and one that triggered some heated debate at the Public Procurement Data Superpowers Conference last week, where, in my view, companies selling procurement insight services were peddling hyped claims (see session on ‘Transparency in public procurement – Data clarity’).

In this post, I venture some thoughts on meaning, AI, and public procurement big data. As always, I am very interested in feedback and opportunities for further discussion.

Meaning

Of course, the concept of meaning is complex and open to philosophical, linguistic, and other interpretations. Here I take a relatively pedestrian and pragmatic approach and, following the Cambridge dictionary, consider two ways in which ‘meaning’ is understood in plain English: ‘the meaning of something is what it expresses or represents’, and meaning as ‘importance or value’.

To put it simply, I will argue that AI cannot capture meaning proper. It can carry out complex analysis of ‘content in context’, but we should not equate that with meaning. This will be important later on.

AI, meaning, embeddings, and ‘content in context’

The OCP introduction helpfully addresses this issue in relation to an example of ‘sentence similarity’, where the researchers are looking for phrases that are alike in tender notices and predefined green criteria, and therefore want to use AI to compare sentences and assign them a similarity score. Intuitively, ‘meaning’ would be important to the comparison.

The OCP introduction explains that:

Computers don’t understand human language. They need to operate on numbers. We can represent text and other information as numerical values with vector embeddings. A vector is a list of numbers that, in the context of AI, helps us express the meaning of information and its relationship to other information.

Text can be converted into vectors using a model. [A sentence transformer model] converts a sentence into a vector of 384 numbers. For example, the sentence “don’t panic and always carry a towel” becomes the numbers 0.425…, 0.385…, 0.072…, and so on.

These numbers represent the meaning of the sentence.

Let’s compare this sentence to another: “keep calm and never forget your towel”, which has the vector (0.434…, 0.264…, 0.123…, …).

One way to determine their similarity score is to use cosine similarity to calculate the distance between the vectors of the two sentences. Put simply, the closer the vectors are, the more alike the sentences are. The result of this calculation will always be a number from -1 (the sentences have opposite meanings) to 1 (same meaning). You could also calculate this using other trigonometric measures such as Euclidean distance.

For our two sentences above, performing this mathematical operation returns a similarity score of 0.869.

Now let’s consider the sentence “do you like cheese?”, which has the vector (-0.167…, -0.557…, 0.066…, …). It returns a similarity score of 0.199. Hooray! The computer is correct!

However, this method is not fool-proof. Let’s try another: “do panic and never bring a towel” (0.589…, 0.255…, 0.0884…, …). The similarity score is 0.857. The score is high, because the words are similar… but the logic is opposite!
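For readers who want to reproduce this kind of comparison, here is a minimal sketch using the open-source sentence-transformers library. The model name is an assumption on my part: the OCP text does not name one, but all-MiniLM-L6-v2 also outputs 384-dimensional vectors, so exact scores will only match the quoted ones if the same model is used.

```python
# A minimal sketch of sentence-similarity scoring with vector embeddings.
# Model choice (all-MiniLM-L6-v2) is an assumption, not the OCP's stated model.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = [
    "don't panic and always carry a towel",   # reference sentence
    "keep calm and never forget your towel",  # similar meaning, different words
    "do you like cheese?",                    # unrelated meaning
    "do panic and never bring a towel",       # similar words, opposite logic
]

# Each sentence becomes a vector of 384 numbers (the embedding).
embeddings = model.encode(sentences)

# Cosine similarity against the reference: 1 = same direction, -1 = opposite.
for sentence, embedding in zip(sentences[1:], embeddings[1:]):
    score = util.cos_sim(embeddings[0], embedding).item()
    print(f"{score:.3f}  {sentence}")
```

The point to note is that the antonymous last sentence will typically score nearly as high as the genuine paraphrase, because the embedding reflects words in context rather than meaning.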

I think there are two important observations in relation to the use of meaning here (highlighted above).

First, meaning can hardly be captured where sentences with opposite logic are considered very similar. This is because the method described above (vector embedding) does not capture meaning. It captures content (words) in context (around other words).

Second, it is not possible to fully express in numbers what text expresses or represents, or its importance or value. What the vectors capture is the representation or expression of such meaning, the representation of its value and importance through the use of those specific words in the particular order in which they are expressed. The string of numbers is thus a second-degree representation of the meaning intended by the words; it is a numerical representation of the word representation, not a numerical representation of the meaning.

Unavoidably, there is much scope for loss, alteration or even inversion of meaning when it goes through multiple imperfect processes of representation. This is why the more open-textured the expression in words and the less contextualised its presentation, the harder it is to achieve good results.

It is important to bear in mind that current systems based on this or similar methods, such as those based on large language models, clearly fail on crucial aspects such as their factuality, which ultimately requires checking whether something with a given meaning is true or false.

This is a burgeoning area of technical research, but it seems that even the most accurate models tend to hover around 70% accuracy, save in highly contextual, non-ambiguous settings (see eg D Quelle and A Bovet, ‘The perils and promises of fact-checking with large language models’ (2024) 7 Front. Artif. Intell., Sec. Natural Language Processing). While this is an impressive feature of these tools, it can hardly be acceptable to extrapolate that they can be deployed for tasks that require precision and factuality.

Procurement big data and ‘content and context’

In some senses, the application of AI to extract insights from procurement big data is well suited to the fact that, by and large, existing procurement data is very precisely contextualised and increasingly concerns structured content; that is, most of the procurement data that is (increasingly) available is captured in structured notices and tends to have a narrowly defined and highly contextual purpose.

From that perspective, there is potential to look for implementations of advanced comparisons of ‘content in context’. But this will most likely have a hard boundary where ‘meaning’ needs to be interpreted or analysed, as AI cannot perform that task. At most, it can help gather the information, but it cannot analyse it, because it cannot ‘understand’ it.
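To make the boundary concrete, here is a hypothetical sketch of what ‘gathering, not analysing’ could look like: embeddings shortlist candidate matches between tender notice text and predefined green criteria, and everything above an arbitrary threshold is routed to a human reviewer. The criteria, sentences, and threshold are all illustrative assumptions, not a tested pipeline.

```python
# Hypothetical sketch: embeddings gather 'content in context' candidates;
# interpreting whether a notice actually satisfies a criterion is left to humans.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model, as above

green_criteria = [
    "products must be made from recycled materials",
    "suppliers must report lifecycle carbon emissions",
]
notice_sentences = [
    "tenderers shall use at least 50% recycled content",
    "the contract concerns catering services for staff events",
]

criteria_vecs = model.encode(green_criteria)
notice_vecs = model.encode(notice_sentences)

THRESHOLD = 0.5  # arbitrary cut-off for illustration only
for i, criterion in enumerate(green_criteria):
    for j, sentence in enumerate(notice_sentences):
        score = util.cos_sim(criteria_vecs[i], notice_vecs[j]).item()
        if score >= THRESHOLD:
            print(f"flag for review: {score:.2f} | {criterion!r} ~ {sentence!r}")
```

The design choice matters: the tool only surfaces candidates; deciding what a flagged match means remains a human task.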

Policy implications

In my view, the above shows that the potential of using AI to extract insights from procurement big data needs to be approached with caution. For tasks where a ‘broad brush’ approach will do, these can be helpful tools. They can help mitigate the informational deficit that procurement policy and practice tend to encounter. As put at the conference last week, these tools can help get a sense of broad trends or directions, and can thus inform policy and decision-making only in that regard and to that extent. Conversely, AI cannot be used in contexts where precision is important and where errors would affect important rights or interests.

This is important, for example, in relation to the fascination that AI ‘business insights’ seem to be triggering among public buyers. One of the issues that kept coming up concerns why contracting authorities cannot benefit from the same advances that are touted as being offered to (private) tenderers. The case at hand was that of identifying ‘business opportunities’.

A number of companies are using AI to support searches for contract notices, to highlight potentially interesting tenders to their clients. They offer services such as ‘tender summaries’, whereby the AI creates a one-line summary on the basis of a contract notice or a tender description, and this summary can be automatically translated (eg into English). They also offer search services based on ‘capturing meaning’ from a company’s website and matching it to potentially interesting tender opportunities.

All these services, however, are at bottom a sophisticated comparison of content in context, not of meaning. And they are deployed to go from more to less information (summaries), which can reduce problems with factuality and precision except in extreme cases, and in a setting where getting it wrong carries only a marginal cost (ie the company will set aside the uninteresting tender and move on). This is also an area where expectations can be managed and where results well below 100% accuracy can be interesting and have value.

The same does not apply from the perspective of the public buyer. For example, a summary of a tender is unlikely to have much value as, in all likelihood, the summary will simply confirm that the tender matches the advertised object of the contract (which has no value, differently from a summary suggesting that a tender matches the business activities of an economic operator). Moreover, factuality is extremely important and only 100% accuracy will do in a context where decision-making is subject to good administration guarantees.

Therefore, we need to be very careful about how we think about using AI to extract insights from procurement (big) data and, as the OCP introduction highlights, one of the most important issues is to clearly define the task for which AI would be used. In my view, those tasks are far more limited than one could dream up if we let our collective imagination run high on hype.
