A Practical Introduction to Semantic Search for Proposal Professionals
Search for what you mean, not what you say
Web search engines are remarkably good at what they do. Full-text search engines actually pre-date the web, but they really came into their own with the need to find information across an ever-expanding array of content on the internet. One of the first internet text crawlers that let users find any word in any web page was WebCrawler. It appeared as a harbinger of uncurated, content-based search, with no need for categories or topic trees to guide you to the information you’re looking for.
Nowadays, nearly all internet search engines work this way, and improvements on this basic approach have given us the current ability to find just about anything we need on the web. Note, however, that this state-of-the-art search technology finds words in documents that match the words in search queries. It’s called ‘keyword matching’ or lexical search. Advanced search engines also look for variations and synonyms of search words so that when you search, the results represent a relevant set of content that often fits exactly what you had in mind.
Enterprise Search Challenges
At the scale of the internet, you can’t do better than Google at search. But building an enterprise service presents a different set of challenges. The amount of content in organizations is much smaller, of course, so it’s easier to index and operate on, but it’s also narrower in range, making finer distinctions more important. Keyword search does a reasonable job, but it requires that your query words exist in the documents. Keyword search has no understanding of what you are searching for; it can only match words.
In the enterprise setting, your search results can be greatly improved if the search engine understands your intent and can use the context of terms within a document. This approach is called ‘semantic search’ and can give you more relevant results since semantic search is based on the meanings of words rather than just the words themselves.
How Does Semantic Search Work?
The latest technology in semantic search is a technique called word embeddings. Word embeddings are mathematical representations of words and phrases. Each embedding captures the context a word or phrase appears in, plus its semantic and syntactic relations to other words. Each word is represented as a vector, so that a computer can do calculations with it.
If you look up the definition of a vector, you’ll find that it’s a mathematical object with both magnitude and direction. That’s not much help in understanding how a word can be a vector until you also know that all the words across a big collection of documents are represented as vectors too, and they all exist within a shared vector space. The length (magnitude) and direction of each vector identify a particular word. Words with similar meanings have similar vector representations and therefore end up close to each other in the vector space.
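To make “close to each other” concrete, here is a toy sketch. The four-dimensional vectors and the word choices below are invented for illustration (real embedding models learn hundreds of dimensions from text); the standard way to measure closeness is cosine similarity, the cosine of the angle between two vectors.

```python
import numpy as np

def cosine_similarity(a, b):
    # Cosine of the angle between the vectors:
    # near 1.0 means similar direction (similar meaning),
    # near 0.0 means unrelated.
    return np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hand-picked toy vectors for illustration only; a trained model
# would learn these values from large amounts of text.
vectors = {
    "proposal": np.array([0.9, 0.1, 0.3, 0.0]),
    "bid":      np.array([0.8, 0.2, 0.4, 0.1]),
    "banana":   np.array([0.0, 0.9, 0.0, 0.8]),
}

print(cosine_similarity(vectors["proposal"], vectors["bid"]))     # high
print(cosine_similarity(vectors["proposal"], vectors["banana"]))  # low
```

A semantic search engine ranks documents the same way: it compares the vector for your query against vectors for the content and returns the nearest neighbors.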
The vector space is key to how semantic search works. It’s relatively easy to imagine a vector in two-dimensional space. You’ve no doubt seen many examples of arrows in graphs that depict some function or value with respect to the two axes (or dimensions) of the graph. Word embeddings are similar, but instead of two dimensions they can have dozens or hundreds. That’s not something we can visualize or even imagine easily, but machines can use multi-dimensional spaces to calculate things like word similarity. The figure that follows shows words projected into a three-dimensional space. Three dimensions is still well short of the dimensionality of word embedding models, but it gives you a sense of how operations can occur in multiple dimensions.
Words of a Feather
A computer has to learn the vector representations by considering all of the words across all of the documents and the contexts each word appears in. There are different approaches for projecting words into a vector space, but they all rely on the fact that in natural language, words with similar meanings tend to appear in similar contexts. This general idea is called distributional semantics, and it dates back to the middle of the last century. One of the leaders in the field, J.R. Firth, illustrated the idea in a famous quote from the 1950s: “You shall know a word by the company it keeps.”
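A minimal sketch of the distributional idea, using a few invented sentences: simply counting which words appear next to each other already makes “cat” and “dog” look alike, because they keep the same company. (Real embedding methods are far more sophisticated, but they build on this same signal.)

```python
from collections import Counter, defaultdict

# Tiny invented corpus: "cat" and "dog" occur in parallel contexts.
sentences = [
    "the cat chased the mouse",
    "the dog chased the ball",
    "the cat ate the fish",
    "the dog ate the bone",
]

# Count, for each word, which words appear immediately beside it.
cooc = defaultdict(Counter)
for s in sentences:
    words = s.split()
    for i, w in enumerate(words):
        for j in range(max(0, i - 1), min(len(words), i + 2)):
            if j != i:
                cooc[w][words[j]] += 1

vocab = sorted({w for s in sentences for w in s.split()})

def vector(word):
    # A word's "vector" here is just its co-occurrence counts
    # over the whole vocabulary.
    return [cooc[word][v] for v in vocab]

print(vector("cat"))
print(vector("dog"))  # identical contexts give identical vectors
```

In this toy corpus, “cat” and “dog” end up with the same neighbor counts, which is exactly the property a vector space needs for similar words to land near each other.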
Why Is It Important to Understand How Semantic Search Works?
If you’re looking to implement solutions in your organization to enable you to leverage your existing knowledge more effectively, it’s important to understand the value delivered by keyword-based solutions vs. those delivering true semantic search capabilities. We’ve designed DraftSpark to use a combination of both keyword matching and semantic search, blending the two to get the best of both worlds.
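As an illustration of what blending can look like (a generic sketch, not DraftSpark’s actual scoring), a hybrid ranker can combine a keyword score and a semantic score with a tunable weight. The document names and scores below are hypothetical.

```python
def hybrid_score(keyword_score, semantic_score, alpha=0.5):
    # alpha controls the blend: 1.0 is pure keyword, 0.0 is pure semantic.
    # The even 0.5 default is illustrative; a real system would tune it.
    return alpha * keyword_score + (1 - alpha) * semantic_score

# Hypothetical normalized scores (0..1) for three documents against one query.
docs = {
    "doc_a": {"keyword": 0.9, "semantic": 0.2},  # matches the words, not the meaning
    "doc_b": {"keyword": 0.1, "semantic": 0.9},  # matches the meaning, not the words
    "doc_c": {"keyword": 0.6, "semantic": 0.7},  # solid on both
}

ranked = sorted(
    docs,
    key=lambda d: hybrid_score(docs[d]["keyword"], docs[d]["semantic"]),
    reverse=True,
)
print(ranked)  # doc_c ranks first: balanced relevance wins under an even blend
```

The point of the blend is that neither signal alone gets this ranking right: pure keyword search would put doc_a first, pure semantic search would put doc_b first.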
In addition, in DraftSpark, we use pre-trained word embedding models that have been built by analyzing massive amounts of text from a variety of online sources. Our current research effort is focused on extending large pre-trained models with the content of individual organizations that use DraftSpark. Combining pre-trained and organization-specific models will allow us to incorporate the general knowledge from pre-trained models with content from a particular organization. Our goal is to produce even more precise matches that are well calibrated to each organization’s information and content needs.
The Risks of Amoral AI
The consequences of deploying automation without considering ethics could be disastrous
This article first appeared on the TechCrunch blog.
Artificial intelligence is now being used to make decisions about lives, livelihoods and interactions in the real world in ways that pose real risks to people.
We were all skeptics once. Not that long ago, conventional wisdom held that machine intelligence showed great promise, but it was always just a few years away. Today there is absolute faith that the future has arrived.
It’s not that surprising with cars that (sometimes and under certain conditions) drive themselves and software that beats humans at games like chess and Go. You can’t blame people for being impressed.
But board games, even complicated ones, are a far cry from the messiness and uncertainty of real life, and autonomous cars still aren’t actually sharing the road with us (at least not without some catastrophic failures).
AI is being used in a surprising number of applications, making judgments about job performance, hiring, loans, and criminal justice among many others. Most people are not aware of the potential risks in these judgments. They should be. There is a general feeling that technology is inherently neutral — even among many of those developing AI solutions. But AI developers make decisions and choose trade-offs that affect outcomes. Developers are embedding ethical choices within the technology but without thinking about their decisions in those terms.
These trade-offs are usually technical and subtle, and the downstream implications are not always obvious at the point the decisions are made.
The fatal Uber accident in Tempe, Arizona, is a not-so-subtle but illustrative example that makes it easy to see how this happens.
The autonomous vehicle system actually detected the pedestrian in time to stop but the developers had tweaked the emergency braking system in favor of not braking too much, balancing a trade-off between jerky driving and safety. The Uber developers opted for the more commercially viable choice. Eventually autonomous driving technology will improve to a point that allows for both safety and smooth driving, but will we put autonomous cars on the road before that happens? Profit interests are pushing hard to get them on the road immediately.
Physical risks pose an obvious danger, but there has been real harm from automated decision-making systems as well. AI does, in fact, have the potential to benefit the world. Ideally, we mitigate the downsides in order to get the benefits with minimal harm.
A significant risk is that we advance the use of AI technology at the cost of reducing individual human rights. We’re already seeing that happen. One important example is that the right to appeal judicial decisions is weakened when AI tools are involved. In many other cases, individuals don’t even know that a choice not to hire, promote, or extend a loan to them was informed by a statistical algorithm.
Buyers of the technology are at a disadvantage when they know so much less about it than the sellers do. For the most part, decision makers are not equipped to evaluate intelligent systems. In economic terms, there is an information asymmetry that puts AI developers in a more powerful position over those who might use it. (Side note: the subjects of AI decisions generally have no power at all.) The nature of AI is that you simply trust (or not) the decisions it makes. You can’t ask technology why it decided something or if it considered other alternatives or suggest hypotheticals to explore variations on the question you asked. Given the current trust in technology, vendors’ promises about a cheaper and faster way to get the job done can be very enticing.
So far, we as a society have not had a way to assess the value of algorithms against the costs they impose on society. There has been very little public discussion even when government entities decide to adopt new AI solutions. Worse than that, information about the data used to train the system, its weighting schemes, model selection, and other choices vendors make while developing the software is deemed a trade secret and therefore not available for discussion.
The Yale Journal of Law and Technology published a paper by Robert Brauneis and Ellen P. Goodman where they describe their efforts to test the transparency around government adoption of data analytics tools for predictive algorithms. They filed forty-two open records requests to various public agencies about their use of decision-making support tools.
Their “specific goal was to assess whether open records processes would enable citizens to discover what policy judgments these algorithms embody and to evaluate their utility and fairness”. Nearly all of the agencies involved were either unwilling or unable to provide information that could lead to an understanding of how the algorithms worked to decide citizens’ fates. Government record-keeping was one of the biggest problems, but companies’ aggressive trade secret and confidentiality claims were also a significant factor.
Data-driven risk assessment tools can be useful, especially for identifying low-risk individuals who can benefit from reduced prison sentences. Reduced or waived sentences alleviate stresses on the prison system and benefit the individuals, their families, and their communities as well. Despite the possible upsides, if these tools interfere with Constitutional rights to due process, they are not worth the risk.
All of us have the right to question the accuracy and relevance of information used in judicial proceedings and in many other situations as well. Unfortunately for the citizens of Wisconsin, the argument that a company’s profit interest outweighs a defendant’s right to due process was affirmed by that state’s supreme court in 2016.
Fairness is in the eye of the beholder
Of course, human judgment is biased too. Indeed, professional cultures have had to evolve to address it. Judges, for example, strive to separate their prejudices from their judgments, and there are processes to challenge the fairness of judicial decisions.
In the United States, the 1968 Fair Housing Act was passed to ensure that real-estate professionals conduct their business without discriminating against clients. Technology companies do not have such a culture. Recent news has shown just the opposite. For individual AI developers, the focus is on getting the algorithms correct with high accuracy for whatever definition of accuracy they assume in their modeling.
I recently listened to a podcast where the conversation wondered whether talk about bias in AI wasn’t holding machines to a different standard than humans—seeming to suggest that machines were being put at a disadvantage in some imagined competition with humans.
As true technology believers, the host and guest eventually concluded that once AI researchers have solved the machine bias problem, we’ll have a new, even better standard for humans to live up to, and at that point the machines can teach humans how to avoid bias. The implication is that there is an objective answer out there, and while we humans have struggled to find it, the machines can show us the way. The truth is that in many cases there are contradictory notions about what it means to be fair.
A handful of research papers have come out in the past couple of years that tackle the question of fairness from a statistical and mathematical point of view. One of the papers, for example, formalizes some basic criteria to determine if a decision is fair.
In their formalization, in most situations, differing ideas about what it means to be fair are not just different but actually incompatible. A single objective solution that can be called fair simply doesn’t exist, making it impossible for statistically trained machines to answer these questions. Considered in this light, a conversation about machines giving human beings lessons in fairness sounds more like theater of the absurd than a thoughtful discussion of the issues involved.
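A toy illustration of that incompatibility, with invented numbers and deliberately simplified metrics: when two groups have different base rates of the outcome being predicted, enforcing one fairness criterion (equal selection rates across groups) forces a violation of another (equal false positive rates).

```python
def rates(y_true, y_pred):
    # Returns (selection rate, false positive rate) for one group.
    selected = sum(y_pred) / len(y_pred)
    fp = sum(1 for t, p in zip(y_true, y_pred) if p and not t)
    negatives = sum(1 for t in y_true if not t)
    return selected, fp / negatives

# Hypothetical data: half of group A truly qualifies; only two of ten in group B do.
truth_a = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
truth_b = [1, 1, 0, 0, 0, 0, 0, 0, 0, 0]

# Enforce equal selection rates: pick exactly five from each group,
# choosing the best-qualified candidates first in both.
pred_a = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]
pred_b = [1, 1, 1, 1, 1, 0, 0, 0, 0, 0]

sel_a, fpr_a = rates(truth_a, pred_a)
sel_b, fpr_b = rates(truth_b, pred_b)
print(sel_a, fpr_a)  # equal selection, zero false positives in group A
print(sel_b, fpr_b)  # equal selection, but group B is falsely flagged far more often
```

Both criteria are reasonable definitions of “fair,” yet no predictor can satisfy both here; deciding which one matters is a value judgment, not a computation.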
When there are questions of bias, a discussion is necessary. What it means to be fair in contexts like criminal sentencing, granting loans, and job and college opportunities has not been settled and, unfortunately, contains political elements. We’re being asked to join in an illusion that artificial intelligence can somehow de-politicize these issues. The fact is, the technology embodies a particular stance, but we don’t know what it is.
Technologists with their heads down, focused on algorithms, are determining important structural issues and making policy choices. This removes the collective conversation and cuts off input from other points of view. Sociologists, historians, political scientists, and above all stakeholders within the community would have a lot to contribute to the debate. Applying AI to these tricky problems paints a veneer of science that tries to dole out apolitical solutions to difficult questions.
Who will watch the (AI) watchers?
One major driver of the current trend to adopt AI solutions is that the negative externalities from the use of AI are not borne by the companies developing it. Typically, we address this situation with government regulation. Industrial pollution, for example, is restricted because it creates a future cost to society. We also use regulation to protect individuals in situations where they may come to harm.
Both of these potential negative consequences exist in our current uses of AI. For self-driving cars, there are already regulatory bodies involved, so we can expect a public dialog about when and in what ways AI-driven vehicles can be used. What about the other uses of AI? Currently, except for some action by New York City, there is exactly zero regulation around the use of AI. The most basic assurances of algorithmic accountability are not guaranteed for either users of technology or the subjects of automated decision making.
Unfortunately, we can’t leave it to companies to police themselves. Facebook’s slogan, “Move fast and break things” has been retired, but the mindset and the culture persist throughout Silicon Valley. An attitude of doing what you think is best and apologizing later continues to dominate.
This has apparently been effective when building systems to upsell consumers or connect riders with drivers. It becomes completely unacceptable when you make decisions affecting people’s lives. Even if well-intentioned, the researchers and developers writing the code don’t have the training or, at the risk of offending some wonderful colleagues, the inclination to think about these issues.
I’ve seen firsthand too many researchers who demonstrate a surprising nonchalance about the human impact. I recently attended an innovation conference just outside of Silicon Valley. One of the presentations included a doctored video of a very famous person delivering a speech that never actually took place. The manipulation of the video was completely imperceptible.
When the researcher was asked about the implications of deceptive technology, she was dismissive of the question. Her answer was essentially, “I make the technology and then leave those questions to the social scientists to work out.” This is just one of the worst examples I’ve seen from many researchers who don’t have these issues on their radars. I suppose that requiring computer scientists to double major in moral philosophy isn’t practical, but the lack of concern is striking.
Recently we learned that Amazon abandoned an in-house technology that they had been testing to select the best resumes from among their applicants. Amazon discovered that the system they created developed a preference for male candidates, in effect, penalizing women who applied. In this case, Amazon was sufficiently motivated to ensure their own technology was working as effectively as possible, but will other companies be as vigilant?
As a matter of fact, Reuters reports that other companies are blithely moving ahead with AI for hiring. A third-party vendor selling such technology actually has no incentive to test that it’s not biased unless customers demand it, and as I mentioned, decision makers are mostly not in a position to have that conversation. Again, human bias plays a part in hiring too. But companies can and should deal with that.
With machine learning, they can’t be sure what discriminatory features the system might learn. Absent market forces, unless companies are compelled to be transparent about the development and use of opaque technology in domains where fairness matters, it’s not going to happen.
Accountability and transparency are paramount to safely using AI in real-world applications. Regulations could require access to basic information about the technology. Since no solution is completely accurate, the regulation should allow adopters to understand the effects of errors. Are errors relatively minor or major? Uber’s use of AI killed a pedestrian. How bad is the worst-case scenario in other applications? How are algorithms trained? What data was used for training and how was it assessed to determine its fitness for the intended purpose? Does it truly represent the people under consideration? Does it contain biases? Only by having access to this kind of information can stakeholders make informed decisions about appropriate risks and trade-offs.
At this point, we might have to face the fact that our current uses of AI are getting ahead of its capabilities and that using it safely requires a lot more thought than it’s getting now.