Monday, October 30, 2017

Ambiguity

One of the problems with teaching computers to understand natural language is that much of the meaning in what people say is actually hidden in what they don't say. As humans, we trivially interpret the meaning of ambiguous words, written or spoken, according to their context. For example, this blog post is published in a blog that largely discusses natural language processing, so if I write "NLP", you'd know I refer to natural language processing rather than to neuro-linguistic programming. If I told you that the blog post doesn't fit into a tweet because it's too long, you'd know that the blog post is too long and not that the tweet is too long. You would infer that even without any knowledge of Twitter's character limit, because it just doesn't make sense otherwise. Unfortunately, the common sense and world knowledge that come so easily to us are not trivial to teach to machines. In this post, I will present a few cases in which ambiguity is a challenge in NLP, along with common ways in which we try to overcome it.


Polysemous words, providing material for dad jokes since... ever.
Lexical Ambiguity
Lexical ambiguity occurs when a word is polysemous, i.e. has more than one meaning, and the sentence in which it appears can be interpreted differently depending on which of its senses is intended.

For example, the word bank has two meanings: either a financial institution or the land alongside a river. When we read a sentence with the word bank, we understand which sense of bank the text refers to according to the context:

(1) Police seek person who robbed bank in downtown Reading.
(2) The faster-moving surface water travels along the concave bank.

In these example sentences, "robbed" indicates the first sense while "water" and "concave" indicate the second.



Existing Solutions for Lexical Ambiguity
Word embeddings are great, but they conflate all the different senses of a word into one vector. Since word embeddings are learned from the occurrences of a word in a text corpus, the word embedding for bank is learned from its occurrences in both senses, and will be affected by neighbors related to the first sense (money, ATM, union) and by those related to the second (river, west, water, etc.). The resulting vector is very likely to tend towards the more common sense of bank, as can be seen in this demo: see how all the nearest words to bank are related to its financial sense.

Word Sense Disambiguation (WSD) is an NLP task aimed at disambiguating a word in context. Given a list of potential word senses for each word, the correct sense of the word in the given context is determined. Similar to the way humans disambiguate words, WSD systems also rely on the surrounding context. A simple way to do so, in a machine-learning based solution (i.e. learning from examples), is to represent a word-in-context as the average of its context word vectors ("bag-of-words"). In the examples above, we get for the first occurrence of bank: feature_vector(bank) = 1/8 (vector(police) + vector(seek) + vector(person) + vector(who) + vector(robbed) + vector(in) + vector(downtown) + vector(reading)), and for the second: feature_vector(bank) = 1/9 (vector(the) + vector(faster) + vector(moving) + vector(surface) + vector(water) + vector(travels) + vector(along) + vector(the) + vector(concave)).
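To make this concrete, here is a minimal sketch of the bag-of-words context representation (not any specific WSD system). The embeddings dictionary is a hypothetical word-to-vector mapping, e.g. loaded from pre-trained word2vec or GloVe vectors:

import numpy as np

def context_vector(sentence_tokens, target, embeddings):
    # Represent a word-in-context as the average of its context word vectors.
    context = [w for w in sentence_tokens if w != target and w in embeddings]
    return np.mean([embeddings[w] for w in context], axis=0)

# For the first example sentence (assuming all words are in the vocabulary):
# context_vector(["police", "seek", "person", "who", "robbed", "bank", "in",
#                 "downtown", "reading"], target="bank", embeddings=embeddings)
# A WSD classifier can then be trained on such feature vectors paired with sense labels.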



Can Google expand the acronym "ACL" correctly for me?
Acronyms
While many words in English are polysemous, things turn absolutely chaotic with acronyms. Acronyms are highly polysemous, some having dozens of different expansions. To make things even more complicated, as opposed to regular words, whose various senses are recorded in dictionaries and taxonomies like WordNet, acronyms are often domain-specific and not commonly known.

Take for example a Google search for "ACL 2017". I get results both for the Annual Meeting of the Association for Computational Linguistics (which is what I was searching for) and for the Austin City Limits festival. I have no idea whether this happens because (a) these are the two most relevant/popular expansions of "ACL" lately or the only ones that go with "2017"; or (b) Google successfully disambiguated my query, showing the NLP conference first, and leaving also the musical festival ranked lower in the search results, since it knows I also like music festivals. Probably (a) :)

Existing Solutions for Acronym Expansion
Expanding acronyms is considered a different task from WSD, since there is no inventory of potential expansions for each acronym. Given enough context (e.g. "2017" is a context word for the acronym ACL), it is possible to find texts that contain the expansion. This can be done either by searching for a pattern (e.g. "Association for Computational Linguistics (ACL)") or by considering all the word sequences that start with these initials and deciding on the correct one using rules or a machine-learning based solution.
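As a toy illustration of the pattern-based option (a rough sketch, not a production acronym expander), one can scan text for a long form immediately followed by the acronym in parentheses and check that the capitalized initials match:

import re

def find_expansions(acronym, text):
    # Look for '... Long Form (ACRONYM) ...' and keep the shortest trailing span
    # of words whose capitalized initials spell out the acronym.
    candidates = []
    pattern = r"((?:\w[\w-]*\s+){1,6})\(" + re.escape(acronym) + r"\)"
    for match in re.finditer(pattern, text):
        words = match.group(1).split()
        for i in range(len(words) - 1, -1, -1):
            initials = "".join(w[0] for w in words[i:] if w[0].isupper())
            if initials == acronym:
                candidates.append(" ".join(words[i:]))
                break
    return candidates

print(find_expansions(
    "ACL",
    "The 55th Annual Meeting of the Association for Computational Linguistics (ACL) "
    "will take place in Vancouver."))
# -> ['Association for Computational Linguistics']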


Syntactic Ambiguity
No beginner NLP class is complete without at least one of the following example sentences:
  1. They ate pizza with anchovies
  2. I shot an elephant wearing my pajamas
  3. Time flies like an arrow
Common to all these examples is that each can be interpreted in multiple different ways, and the different interpretations correspond to different underlying syntactic structures of the sentence. Let's go over the examples.

The first sentence "They ate pizza with anchovies", can be interpreted as (i) "they ate pizza and the pizza had anchovies on it", which is the more likely interpretation, illustrated on the left side of the image below. This sentence has at least two more crazy interpretations: (ii) they ate pizza using anchovies (instead of using utensils, or eating with their hands), as in the right side of the image below, and (iii) they ate pizza and their anchovy friends ate pizza with them.

Visual illustration of the interpretations of the sentence "They ate pizza with anchovies".
Image taken from https://explosion.ai/blog/syntaxnet-in-context.
The first interpretation considers "with anchovies" as describing the pizza, while the other two consider it as describing the eating action. In the output of a syntactic parser, the interpretations will differ by the tree structure, as illustrated below.

Possible syntactic trees for the sentence "They ate pizza with anchovies", using displacy.

Although this is a classic example, both the Spacy and the Stanford Core NLP demos got it wrong. The difficulty is that, syntactically speaking, both trees are plausible. Humans know to prefer the first one based on the semantics of the words, and on their knowledge that an anchovy is something that you eat rather than something you eat with. Machines don't come with this knowledge.
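If you want to check what a parser decides for this sentence on your machine, here is a quick sketch using spaCy (assuming spaCy and its small English model are installed; the output depends on the model version, and as noted above, it may well get the attachment wrong):

import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("They ate pizza with anchovies")

for token in doc:
    # Print each word, its dependency label, and the word it attaches to.
    print(f"{token.text:10s} --{token.dep_:>5s}--> {token.head.text}")

# If "with" attaches to "pizza", the parser chose the "anchovies on the pizza"
# reading; if it attaches to "ate", it chose the "eating with anchovies" reading.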

A similar parser decision is crucial in the second sentence. Just in case you haven't managed to find the funny interpretations yet: "I shot an elephant wearing my pajamas" has two ambiguities. First, does shoot mean taking a photo of, or pointing a gun at? (a lexical ambiguity). But more importantly, who's wearing the pajamas? That depends on whether wearing is attached to shot (meaning that I wore the pajamas while shooting) or to elephant (meaning that the elephant miraculously managed to squeeze into my pajamas). This entire scene, regardless of the interpretation, is very unlikely, and please don't kill elephants, even if they're stretching your pajamas.

The third sentence is just plain weird, but it also has multiple interpretations, which you can read about here.

Existing Solutions for Syntactic Ambiguity
In the past, parsers were based on deterministic grammar rules (e.g. a noun and a modifier create a noun-phrase) rather than on machine learning. A possible solution to the ambiguity issue was to add different rules for different words. For more details, you can read my answer to Natural Language Processing: What does it mean to lexicalize PCFGs? on Quora.

Today, similarly to other NLP tasks, parsers are mostly based on neural networks. In addition to other information, the word embeddings of the words in the sentence are used for deciding on the correct output. So potentially, such a parser may learn that "eat * with [y]" yields the parse on the left of the image if y is edible (i.e. its embedding is similar to those of other edible things), and the one on the right otherwise.
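Here is a toy sketch of that intuition only (not of how an actual neural parser works): decide the attachment of the "with" phrase by comparing the embedding of y to a few prototype words. The embeddings dictionary and the prototype lists are illustrative assumptions:

import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def attach_with_phrase(y, embeddings,
                       edible=("cheese", "olives", "mushrooms"),
                       instruments=("fork", "knife", "spoon")):
    # Does y look more like a topping or like an instrument?
    edible_sim = max(cosine(embeddings[y], embeddings[w]) for w in edible)
    instrument_sim = max(cosine(embeddings[y], embeddings[w]) for w in instruments)
    return "attach to the noun (topping)" if edible_sim > instrument_sim \
        else "attach to the verb (instrument)"

# attach_with_phrase("anchovies", embeddings)  ->  hopefully "attach to the noun (topping)"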


Coreference Ambiguity
Very often a text mentions an entity (someone/something), and then refers to it again, possibly in a different sentence, using another word. Take these two paragraphs from a news article as an example:

From https://www.theguardian.com/sport/2017/sep/22/donald-trump-nfl-national-anthem-protests.
The various entities participating in the article were marked in different colors.

I marked various entities that participate in the article in different colors. I grouped together different mentions of the same entities, including pronouns ("he" as referring to "that son of a bitch"; excuse my language, I'm just quoting Trump) and different descriptions ("Donald Trump", "the president"). To do that, I had to use my common sense (the he must refer to that son of a bitch who disrespected the flag, definitely not to the president or the NFL owners, right?) and my world knowledge (Trump is the president). Again, any task that requires world knowledge and reasoning is difficult for machines.

Existing Solutions for Coreference Resolution
Coreference resolution systems group mentions that refer to the same entity in the text. They go over each mention (e.g. the president), and either link it to an existing group containing previous mentions of the same entity ([Donald Trump, the president]), or start a new entity cluster ([the president]). Systems differ from each other, but in general, given a pair of mentions (e.g. Donald Trump, the president), they extract features referring either to each single mention (e.g. part-of-speech, word vector) or to the pair (e.g. gender/number agreement, etc.), and decide whether these mentions refer to the same entity.
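Schematically, the mention-pair view described above can be sketched as follows; the feature set and the classifier choice here are illustrative assumptions, not those of any particular system:

from sklearn.linear_model import LogisticRegression

def pair_features(m1, m2):
    # m1, m2 are dicts like {'text': 'the president', 'gender': 'm',
    #                        'number': 'sg', 'is_pronoun': False}
    return [
        int(m1["text"].lower() == m2["text"].lower()),  # exact string match
        int(m1["gender"] == m2["gender"]),              # gender agreement
        int(m1["number"] == m2["number"]),              # number agreement
        int(m1["is_pronoun"] or m2["is_pronoun"]),      # at least one pronoun
    ]

# Given mention pairs labeled as coreferent or not:
# X = [pair_features(m1, m2) for m1, m2, _ in training_pairs]
# y = [label for _, _, label in training_pairs]
# clf = LogisticRegression().fit(X, y)
# clf.predict([pair_features(mention_a, mention_b)])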

Note that mentions can be proper-names (Donald Trump), common nouns (the president) and pronouns (he); identifying coreference between pairs of mentions from each type requires different abilities and knowledge. For example, proper-name + common noun may require world knowledge (Donald Trump is the president), while pairs of common nouns can sometimes be solved with semantic similarity (e.g. synonyms like owner and holder). Pronouns can sometimes be matched to their antecedent (original mention) based on proximity and linguistic cues such as gender and number agreement, but very often there is more than one possible option for matching.

A nice example of solving coreference ambiguity is the Winograd Schema challenge, which I first heard about from this post in the Artificial Detective blog. In this contest, computer programs are given a sentence with two nouns and an ambiguous pronoun, and they need to answer which noun the pronoun refers to, as in the following example:

The trophy would not fit in the brown suitcase because it was too big. What was too big?
Answer 0: the trophy
Answer 1: the suitcase

Answering such questions requires, yes, you guessed correctly, commonsense and world knowledge. In the given example, the computer must reason that for the first object to fit into the second, the first object must be smaller than the second, so if the trophy could not fit into the suitcase, the trophy must be too big. Conversely, if the sentence had said small instead of big, the answer would have been "the suitcase".


Noun Compounds
Words are usually considered the basic units of language, and many NLP applications use word embeddings to represent the words in the text. Word embeddings do a pretty decent job of capturing the semantics of a single word, and sometimes also its syntactic and morphological properties. The problem starts when we want to capture the semantics of a multi-word expression (or a sentence, or a document). The embedding of a word, for example dog, is learned from its occurrences in a large text corpus; the more common a word is, the more occurrences there are, and the higher the quality of the learned word embedding (it would be located "correctly" in the vector space, near things that are similar to dog). A bigram like hot dog is already much less frequent, hot dog bun even less so, and so on. The conclusion is clear - we can't learn embeddings for multi-word expressions the same way we do for single words.

The alternative is to try to somehow combine the word embeddings of the single words in the expression into a meaningful representation. Although there are many approaches for this task, there is no one-size-fits-all solution for this problem; a multi-word expression is not simply the sum of its single word meanings (hot dog is an extreme counter-example!).
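One of the simplest compositions one could try is just summing or averaging the single-word vectors. This is only a baseline sketch (again assuming a hypothetical embeddings dictionary), and it illustrates exactly where non-compositional expressions break it:

import numpy as np

def additive_phrase_vector(words, embeddings):
    # Compose a phrase vector by summing the word vectors of its parts.
    return np.sum([embeddings[w] for w in words], axis=0)

# additive_phrase_vector(["hot", "dog"], embeddings) lands somewhere between
# "hot" and "dog" in the vector space, far from where an embedding learned from
# actual occurrences of "hot dog" (the food) would be.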

One example out of many would be noun-compounds. A noun-compound is a noun that is made up of two or more words, which usually consists of the head (main) noun and its modifiers, e.g. video conference, pumpkin spice latte, and paper clip. The use of noun-compounds in English is very common, but most noun-compounds don't appear frequently in text corpora. As humans, we can usually interpret the meaning of a new noun-compound if we know the words it is composed of; for example, even though I've never heard of watermelon soup, I can easily infer that it is a soup made of watermelon.

Similarly, if I want my software to have a nice vector representation of watermelon soup, there is no way I can base it on the corpus occurrences of watermelon soup -- it would be too rare. However, I used my commonsense to build a representation of watermelon soup in my head -- how would my software know that there is a made of relation between watermelon and soup? This relation can be one out of many, for example: video conference (means), paper clip (purpose), etc. Note that the relation is implicit, so there is no immediate way for the machine to know what the correct relation between the head and the modifier is.1  To complicate things a bit further, many noun-compounds are non-compositional, i.e. the meaning of the compound is not a straightforward combination of the meanings of its words, as in hot dog, baby sitting, and banana hammock.

Existing Solutions for Noun-compound Interpretation
Automatic methods for interpreting the relation between the head and the modifier of noun-compounds have largely been divided into two approaches:

(1) machine-learning methods, i.e. hand-labeling a bunch of noun-compounds with a set of pre-defined relations (e.g. part of, made of, means, purpose...), and learning to predict the relation for unseen noun-compounds. The features are either related to each single word (head/modifier), such as their word vectors or lexical properties from WordNet, or to the noun-compound itself and its corpus occurrences. Some methods also try to learn a vector representation for a noun-compound by applying a function to the word embeddings of its single words (e.g. vector(olive oil) = function(vector(olive), vector(oil))).

(2) finding joint occurrences of the nouns in a text corpus, some of which would explicitly describe the relation between the head and the modifier. For example "oil made of olives".
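As a crude sketch of the second approach only (assuming some tokenized corpus of sentences is available, and with a tiny, invented pattern list), one could count explicit paraphrases of the compound in the corpus and pick the most frequent relation:

import re

PATTERNS = {
    "made of": r"{head}\s+(?:is\s+|was\s+)?(?:made|produced)\s+(?:of|from)\s+{mod}",
    "purpose": r"{head}\s+(?:used\s+)?for\s+{mod}",
    "part of": r"{head}\s+of\s+(?:a|the)\s+{mod}",
}

def guess_relation(modifier, head, corpus_sentences):
    # Count how often each relation is expressed explicitly in the corpus.
    counts = {rel: 0 for rel in PATTERNS}
    for sent in corpus_sentences:
        for rel, pat in PATTERNS.items():
            if re.search(pat.format(head=head, mod=modifier), sent, re.IGNORECASE):
                counts[rel] += 1
    return max(counts, key=counts.get) if any(counts.values()) else None

print(guess_relation("olives", "oil", ["This oil is made from olives grown in Italy."]))
# -> 'made of'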

While there has been a lot of work in this area, success on this task is still mediocre. A recent paper suggested that current methods succeed mostly by predicting the relation based solely on the head or on the modifier - for example, most noun-compounds with the head "oil" hold the made of relation (olive oil, coconut oil, avocado oil, ...). While this guess can be pretty accurate most of the time, it may cause funny mistakes, as in the meme below.

From http://www.quickmeme.com/meme/3r9thy.

For the sake of simplicity, I focused on two-word noun-compounds, but noun-compounds with more than two words have an additional, syntactic ambiguity: what are the head-modifier relations within the compound? This is often referred to as bracketing. Without getting into too many details, consider the example of hot dog bun from before: it should be interpreted as [[hot dog][bun]] rather than [hot [dog bun]].





More to read?
Yeah, I know it was a long post, but there is so much more ambiguity in language that I haven't discussed. Here is another selected topic, in case you're looking for more to read. We all speak a second language called emoji, which is full of ambiguity. Here are some interesting articles about it: Emoji could cause confusion, trouble in the workplace, The real meaning of all those emoji in Twitter handles, Learning the language of emoji, and Why emojis may be the best thing to happen to language in the digital age. For the older people among us (and in the context of emoji, I consider myself old too, so no offence anyone), if you're not sure about the meaning of an emoji, why don't you check emojipedia first, just to make sure you're not accidentally using phallic symbols in your grocery list?


1 In this very interesting paper by Preslav Nakov there is a nice observation: a noun-compound is a "compression device" that allows saying more with fewer words. 

Wednesday, August 9, 2017

Paraphrasing

One of the things that make natural language processing so difficult is language variability: there are multiple ways to express the same idea/meaning. I have mentioned it several times on this blog, since it is a true challenge for any application that aims to interact with humans. You may program it to understand common things or questions that a human may have, but if the human decides to deviate from the script and phrase them slightly differently, the program is helpless. If you want a good example, take your favorite personal assistant (Google Assistant, Siri, Alexa, etc.) and ask it a question you know it can answer, but this time use a different phrasing. Here is mine:


Both questions I asked have roughly the same meaning, yet Google answers the first perfectly but fails to answer the second, backing off to showing search results. In fact, I just gave you a "free" example of another difficult problem in NLP, which is ambiguity. It seems that Google interpreted showers as "meteor showers" rather than as light rain.

One way to deal with the language variability difficulty is to construct a huge dictionary that contains groups or pairs of texts with roughly the same meaning: paraphrases. Then, applications like the assistant can, given a new question, look up in the dictionary any question with the same meaning that they were programmed to answer. Of course, this is a naive idea, given that language is infinite and one can always form a new sentence that has never been said before. But it's a good start, and it may help develop algorithms that can associate a new unseen text with an existing dictionary entry (i.e. generalize). 

Several approaches have been used to construct such dictionaries, and in this post I will present some of the simple-but-smart approaches. 

Translation-based paraphrasing
The idea behind this approach is super clever and simple: suppose we are interested in collecting paraphrases in English. If two English texts are translated to the same text in a foreign language, then they are likely paraphrases of each other. Here is an example:

The English texts on the left are translated into the same Italian text on the right, implying that they have the same meaning.
This approach goes back as far as 2001. The most prominent resource constructed with this approach is the paraphrase database (PPDB). It is a resource containing hundreds of millions of text pairs with roughly the same meanings. Using the online demo, I looked up paraphrases of "nice to meet you", yielding a bunch of friendly variants that may be of use for conference small talk: 

it was nice meeting you
it was nice talking to you
nice to see you
hey, you guys
it's nice to meet you
very nice to meet you
nice to see you
i'm pleased to meet you
it's nice to meet you
how are you
i'm delighted
it's been a pleasure

Paraphrases of "nice to meet you", from PPDB.

In practice, all these texts appear as paraphrases of "nice to meet you" in the resource, with different scores (to what extent is this text a paraphrase of "nice to meet you"?). These texts were found to be translated to the same text in a single or in multiple foreign languages, and their scores correspond to the translation scores (as explained here), along with other heuristics.2  
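The pivot idea behind such scores can be sketched in a few lines: two English phrases are candidate paraphrases if they share foreign translations, and the score sums the translation probabilities over the shared pivots. The tiny phrase table below is invented for illustration; real tables are extracted from aligned bilingual corpora:

from collections import defaultdict

# Toy translation probabilities: p(foreign | english) and p(english | foreign).
p_f_given_e = {"nice to meet you": {"piacere di conoscerti": 0.7, "molto lieto": 0.3}}
p_e_given_f = {"piacere di conoscerti": {"nice to meet you": 0.5, "pleased to meet you": 0.4},
               "molto lieto": {"delighted": 0.6, "pleased to meet you": 0.2}}

def paraphrase_scores(e1):
    scores = defaultdict(float)
    for f, p_f in p_f_given_e.get(e1, {}).items():   # pivot through each foreign translation
        for e2, p_e2 in p_e_given_f.get(f, {}).items():
            if e2 != e1:
                scores[e2] += p_f * p_e2             # p(e2|e1) approximated by the sum over f of p(f|e1)*p(e2|f)
    return dict(scores)

print(paraphrase_scores("nice to meet you"))
# -> roughly {'pleased to meet you': 0.34, 'delighted': 0.18}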

While this approach provides a ton of very useful paraphrases, as you can guess, it also introduces errors, as does every automatic method. One type of error occurs when the foreign word has more than one sense, each translating into a different, unrelated English word. For example, the Spanish word estacion has two meanings: station and season. When given a Spanish sentence that contains this word, it is translated (hopefully) to the correct English word according to the context. This paraphrase approach, however, does not look at the original sentences in which these words occur, but only at the phrase table -- a huge table of English phrases and their Spanish translations without their original contexts. It therefore has no way at this point to tell that stop and station refer to the same meaning of estacion, and are therefore paraphrases, while season and station are translations of two different senses of estacion.

Even without making such a horrible mistake of considering two completely unrelated texts as paraphrases, paraphrasing is not well-defined, and the paraphrase relation encompasses many different relations. For example, looking for paraphrases of the word tired in PPDB, you will get equivalent phrases like fatigued, more specific phrases like overtired/exhausted, and related but not-quite-the-same phrases like bored. This may occur when the translator likes being creative and does not remain completely faithful to the original sentence, but also when the target language does not contain an exact translation for a word, defaulting to a slightly more specific or more general word. While this phenomenon is not specific to this approach, but rather shared by all the paraphrasing approaches (for different reasons), it has been studied by the PPDB people, who did an interesting analysis of the different semantic relations the resource captures.


The following approaches focus on paraphrasing predicates. A predicate is a text describing an action or a relation between one or more entities/arguments, very often containing a verb. For example: John ate an apple or Amazon acquired Whole Foods. Predicate paraphrases are pairs of predicate templates -- i.e. predicates whose arguments were replaced by placeholders -- that would have roughly the same meaning given an assignment to their arguments. For example, [a]0 acquired [a]1 and [a]0 bought [a]1 are paraphrases given the assignment [a]0 = Amazon and [a]1 = Whole Foods.1  Most approaches focus on binary predicates (predicates with two arguments).

Argument-distribution paraphrasing
This approach relies on a simple assumption: if two predicates have the same meaning, they should normally appear with the same arguments. Here is an example:

In this example, the [a]0 slots in both predicates are expected to contain names of companies that acquired other companies, while the [a]1 slots are expected to contain the acquired companies. 

The DIRT method represents each predicate as two vectors: (1) the distribution of words that appeared in its [a]0 argument slot, and (2) the distribution of words that appeared in its [a]1 argument slot. For example, the [a]0 vectors of the predicates in the example will have positive/high values for names of people and names of companies that acquired other companies, and low values for other (small) companies and other unrelated words (cat, cookie, ...). To measure the similarity between two predicates, the two vector pairs ([a]0 in each predicate and [a]1 in each predicate) are compared using vector similarity measures (e.g. cosine similarity), and a final score averages the per-slot similarities.
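A very simplified, count-based sketch of this idea (the actual DIRT method weights the slot features and combines the similarities somewhat differently) could look like this, with invented argument counts:

import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def slot_vector(counts, vocab):
    # Turn a dictionary of argument counts into a vector over a fixed vocabulary.
    return np.array([counts.get(w, 0) for w in vocab], dtype=float)

vocab = ["amazon", "google", "microsoft", "whole_foods", "youtube", "linkedin"]

acquired = {"a0": {"amazon": 5, "google": 7, "microsoft": 4},
            "a1": {"whole_foods": 5, "youtube": 6, "linkedin": 3}}
bought   = {"a0": {"amazon": 3, "google": 2, "microsoft": 1},
            "a1": {"whole_foods": 2, "youtube": 2, "linkedin": 1}}

similarity = np.mean([cosine(slot_vector(acquired[slot], vocab),
                             slot_vector(bought[slot], vocab))
                      for slot in ("a0", "a1")])
print(f"similarity(acquired, bought) = {similarity:.2f}")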

Now, while it is true that predicates with the same meaning often share arguments, it is definitely not true that predicates that share a fair amount of their argument instantiations are always paraphrases. A simple counterexample would be predicates with opposite meanings, which often tend to appear with similar arguments: for instance, "[stock] rise to [30]" and "[stock] fall to [30]", or "[a]0 acquired [a]1" and "[a]0 sold [a]1" with any [a]0 that once bought an [a]1 and then sold it.

Following this approach, other methods were suggested, such as capturing a directional inference relation between predicates (e.g. [a]0 shot [a]1 => [a]0 killed [a]1 but not vice versa), releasing a huge resource of such predicate pairs (see the paper); and a method to predict whether one predicate entails the other, given a specific context (see the paper). 

Event-based paraphrases
Another good source for paraphrases is multiple descriptions of the same news event, as various news reporters are likely to choose different words to describe the same event. To automatically group news headlines discussing the same story, it is common to group them according to the publication date and word overlap. Here is an example of some headlines describing the acquisition of Whole Foods by Amazon:


We can stop here and say that all these headlines are sentential paraphrases. However, going a step further, if we've already observed in the past Google to acquire YouTube / Google is buying YouTube as sentential paraphrases (and many other similar paraphrases), we can generalize and say that [a]0 to acquire [a]1 and [a]0 is buying [a]1 are predicate paraphrases.

Early works relying on this approach are 1, 2, followed by some more complex methods like 3. We recently harvested such paraphrases from Twitter, assuming that tweets with links to news web sites that were published on the same day are likely to describe the same news events. If you're interested in more details, here are the paper, the poster and the resource.

This approach is potentially more accurate than the argument-distribution approach: the latter assumes that predicates that often occur with the same arguments are paraphrases, while the event-based approach considers predicates with the same arguments as paraphrases only if it believes that they describe the same event.
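The grouping step itself can be sketched very simply, e.g. by clustering headlines that share a publication date and have enough word overlap (Jaccard similarity); the threshold and the whitespace tokenization below are illustrative choices, not those of any particular paper:

def jaccard(a, b):
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / len(a | b)

def group_headlines(headlines, threshold=0.3):
    # headlines: list of (date, text) pairs; returns clusters of headlines.
    clusters = []
    for date, text in headlines:
        for cluster in clusters:
            c_date, c_text = cluster[0]
            if date == c_date and jaccard(text, c_text) >= threshold:
                cluster.append((date, text))
                break
        else:
            clusters.append([(date, text)])
    return clusters

headlines = [("2017-06-16", "Amazon to acquire Whole Foods for $13.7 billion"),
             ("2017-06-16", "Amazon is buying Whole Foods"),
             ("2017-06-16", "Apple announces a new iPad Pro")]
print(group_headlines(headlines))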


What does the future hold? Neural paraphrasing methods, of course. I won't go into technical details (I feel that there are enough "neural network for dummies" blog posts out there, and I'm by no means an expert on that topic). The idea is to build a model that reads a sequence of words and then generates a different sequence of words that has the same meaning. If it sounds like inexplicable magic, it is mostly because even the researchers working on this task can at most make educated guesses about why something works well or not. In any case, if this ever ends up working well, it will be much better than the resources we have today, since it will be capable of providing paraphrases / judging the correctness of paraphrases for new texts that were never observed before.


1 Of course, given a different choice of arguments, these predicates will not be considered as paraphrases. For example, Mary acquired a skill is not a paraphrase of Mary bought a skill. The discussed approaches consider predicate-pairs as paraphrases, if there exists an argument assignment (/context) under which these predicates are paraphrases.   
2 See also more recent work on translation-based paraphrasing.  

Wednesday, March 1, 2017

Women in STEM*

This is a special post towards International Women's Day (March 8th). Every year I find myself enthusiastically conveying my thoughts about the topic to the people around me, so I thought I might as well share them with a broader audience. As always, this post presents my very limited knowledge/interpretation of a broadly discussed and studied topic. However, it may be a bit off topic for this blog, so if you're only interested in computational stuff, you can focus on section 3.

1. The Problem
Even though we are half of the population, women are quite poorly represented in STEM:

USA: the percentage of computing occupations held by women has been declining since 1991, when it reached a high of 36%. The current rate is 25%. [2016, here]

OECD member countries: While women account for more than half of university graduates in scientific fields in several OECD countries, they account for only 25% to 35% of researchers in most OECD countries. [2006, here]

2. The Causes (and possible solutions)

2.1 Cognitive Differences
There is a common conception that female abilities in math are biologically inferior to those of males. Many highly cited psychology papers suggest otherwise, for example:

"Stereotypes that girls and women lack mathematical ability persist, despite mounting evidence of gender similarities in math achievement." [1].

"...provides evidence that mathematical and scientific reasoning develop from a set of biologically based cognitive capacities that males and females share. These capacities lead men and women to develop equal talent for mathematics and science." [2]

    From https://imgs.xkcd.com/comics/how_it_works.png.

    In addition, if cognitive differences were so prominent, there wouldn't be so many women graduating in scientific fields. It seems that the problem lies in occupational gender segregation, which may be explained by any one of the following:

    2.2 Family Life
    Here are some references from studies conducted about occupational gender segregation:

    "In some math-intensive fields, women with children are penalized in promotion rates." [3]
      "[...] despite the women's movement and more efforts in society to open occupational doors to traditional male-jobs for women, concerns about balancing career and family, together with lower value for science-related domains, continue to steer young women away from occupations in traditionally male-dominated fields, where their abilities and ambitions may lie." [4]

      "women may “prefer” those [jobs] with flexible hours in order to allow time for childcare, and may also “prefer” occupations which are relatively easy to interrupt for a period of time to bear or rear children." [5] (the quotation marks are later explained, indicating that this is not a personal preference but rather influenced by learned cultural and social values).

I'd like to focus the discussion now on my local point of view of the situation in Israel, since I suspect that it is the most prominent cause of the problem here. I would be very interested in getting comments regarding what it is like in other countries.

      From http://explosm.net/comics/2861/

      According to the Central Bureau of Statistics, in 2014, 48.9% of the workers in Israel were women (and 51.1% were men). The average salary was 7,439 NIS for women and 11,114 for men. Wait, what?... let me introduce another (crucial) factor.

      While the fertility rate has decreased in all other OECD member countries, in Israel it remained stable for the last decade, with an average of 3.7 children per family. On a personal note, as a married woman without children, I can tell you that it is definitely an issue, and "when are you planning to have children already?" is considered a perfectly valid question here, even from strangers (and my friends with 1 or 2 children often get "when do you plan to have the 2nd/3rd child?").

Paid maternity leave is 14 weeks, with the possibility (used by anyone who can afford it) of extending it by 3 more unpaid months. Officially, either of the parents can take the leave, but in practice, since this law was introduced in 1998, only roughly 0.4% of the parents who took parental leave were fathers. 

Here is the number connecting the dots, and explaining the salary gap: in 2014, the average number of work hours per week was 45.2 for men and 36.7 for women. The culture in Israel is torn between traditional family roles (the mother as the main parent) and modern opportunities for women. Most women I know have a career in the morning, and a second job in the afternoon with the kids. With a hard constraint of leaving work before 16:00 to pick up the kids, in a demanding market like Israel's, it is much harder for a woman to get promoted. This positions the high-tech industry, in which working hours are known to be long, as a male-dominated environment. Indeed, in 2015, only 36.2% of the high-tech workers in Israel were women.

This situation is doubly troubling: on the one hand, it is difficult for women who do choose demanding careers. They have to juggle home and work in a way that men are never required to. On the other hand, women are oriented from childhood towards feminine occupations that are less demanding in terms of working hours. 

      Don't get me wrong, I'm not here to judge. Being a feminist doesn't entail that the woman must have a career while the man has to stay at home with the children. Each couple can decide on their division of labor as they wish. It's the social expectations and cultural bias that I'm against. I've seen this happening time after time: the man and the woman both study and build up their careers, they live in equality, and then the birth of their first child, and specifically maternity leave, is the slippery slope after which equality is a fantasy. 

To make a long story short, I think it is not women the market is against, but mothers. When I say "against" I include allegedly good ideas such as allowing a mother to leave work at 16:00. While I'm not against leaving work at 16:00 (modern slavery is a topic for another discussion...), I don't see why this "privilege" should be reserved only for mothers. In my humble opinion, it would benefit mothers, fathers, children and the market if men and women could each get 3 days a week on which they leave work as "early" as 16:00. It wouldn't hurt if both men and women had the right to take parental leave together, developing their parenthood as a shared job. This situation will never change unless the market overcomes ancient social norms and stops treating parenthood as a job for women.

      2.3 Male-dominated Working Environments 
Following the previous section, tech workplaces (everywhere) are dominated by men, so even women who choose to work in this industry might feel uncomfortable in their workplaces. Luckily for me, I can't attest to this from my own experience: I've never been treated differently as a woman, and have never felt threatened or uncomfortable in situations in which I was the only woman. This article exemplifies some of the things that other women have experienced:

      "Many [women] will say that their voice is not heard, they are interrupted or ignored in meetings; that much work takes place on the golf course, at football matches and other male-dominated events; that progress is not based on merit and women have to do better than men to succeed, and that questions are raised in selection processes about whether a woman “is tough enough”."

      From http://www.phdcomics.com/comics/archive.php?comicid=490
I've only become aware of these problems recently, so I guess it is both a good sign (that it might not be too common, or at least that not all women experience it) and a bad sign (that many women still suffer from it and there's not enough awareness). This interesting essay written by Margaret Mitchell suggests some practical steps to make women feel more comfortable in their workplaces.

Of course, things get much worse when you consider sexual harassment in workplaces. I know that awareness of the subject is very high today, that an employer's duty to prevent sexual harassment is statutory in many countries, and that many big companies require new employees to undergo sexual harassment prevention training. While this surely mitigates the problem, it is still too common, with a disturbing story just from last week (and many other stories untold). As with every other law, there will always be people breaking it, but it is the employers' duty to investigate any reported case and handle it, even at the cost of losing a valuable worker.

        2.4 Gender Stereotypes 
Simply because it's so difficult to change reality: even if some of the reasons why women were previously less likely to work in these industries are no longer relevant, girls will still be less inclined to work in these fields, since the fields are considered unsuitable for them.

        From https://www.facebook.com/DoodleTimeSarah/

An interesting illustration was provided in this work, where 26 girls (around 4 years old) were shown different Barbie dolls and asked whether they believed women could do masculine jobs. When the Barbie dolls were dressed in "regular" outfits, many of the girls replied negatively, but after being shown a Barbie dressed up in a masculine outfit (firefighter, astronaut, etc.), the girls believed that they too could do non-stereotypical jobs.

This is the vicious circle that people are trying to break by encouraging young girls to study scientific subjects and supporting women already working in these fields. Specifically, by organizing women-only conferences, offering scholarships for women, and making sure that there is a female representative in any professional group (e.g. panel, committee, etc.). While I understand the rationale behind changing the gender distribution, I often feel uncomfortable with these solutions. I'll give an example.

        Let's say I submitted a paper to the main conference in my field, and that paper was rejected. Then somebody tells me "there's a women-only workshop, why don't you submit your paper there?". If I submit my paper there and it gets accepted, how can I overcome the feeling of "my paper wasn't good enough for a men's conference, but for a woman's paper it was sufficient"?

For the same reason, I'm uncomfortable with affirmative action. If I'm a woman applying for a job somewhere and I find out that they prefer women, I might assume that there was a man who was more talented/suitable than me but they settled for me because I am a woman. If that's true, it is also unfair to that man. In general, I want my work to be judged solely based on its quality, preferably without taking gender into consideration, for better and for worse.

        I know I'm presenting a naive approach and that in practice, gender plays a role, even if subconsciously. I also don't really have a better solution for that, but I do hope that if we take care of all the other reasons I discussed, this distribution will eventually change naturally. 

        3. Statistics and Bias
        Last year there was an interesting paper [6], followed by a lengthy discussion, about gender stereotypes in word embeddings. Word embeddings are trained with the objective of capturing meaning through co-occurrence statistics. In other words, words that often occur next to the same neighboring words in a text corpus are optimized to be close together in the vector space. Word embeddings have proved to be extremely useful for many downstream NLP applications.

The problem that this paper presented was that these word embeddings also capture "bad" statistics, for example gender stereotypes with regard to professions. For instance, word embeddings have a nice property of capturing analogies like "man:king :: woman:queen", but these analogies also contain gender stereotypes like "father:doctor :: mother:nurse", "man:computer programmer :: woman:homemaker", and "he:she :: pilot:flight attendant".

Why this happens is pretty obvious: word embeddings are not trained to capture "truth" but only statistics. If most nurses are women, then nurse will occur in the corpus next to words that are more likely to co-occur with feminine words than with masculine words, resulting in a higher similarity between nurse and woman than between nurse and man. In other words, if the input corpus reflects the stereotypes and biases of society, so will the word embeddings.
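You can probe this yourself with the usual vector arithmetic, e.g. using gensim and a set of pre-trained word2vec vectors (the file name below is a placeholder, and token names such as computer_programmer depend on the vocabulary of the vectors you load):

from gensim.models import KeyedVectors

# Load pre-trained vectors (placeholder path; any word2vec-format file works).
model = KeyedVectors.load_word2vec_format("GoogleNews-vectors-negative300.bin", binary=True)

# Analogies are answered by vector arithmetic: king - man + woman is close to queen.
print(model.most_similar(positive=["woman", "king"], negative=["man"], topn=3))

# The same arithmetic can surface the stereotyped analogies discussed above.
print(model.most_similar(positive=["woman", "computer_programmer"], negative=["man"], topn=3))
print(model.most_similar(positive=["she", "doctor"], negative=["he"], topn=3))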

        So why is this a problem, anyway? Don't we want word embeddings to capture the statistics of the real world, even the kind of statistics we don't like? If something should be bothering us, it is the bias in society, rather than the bias these word embeddings merely capture. Or in other words:



        I like this tweet because I was wondering just the same when I first heard about this work. The key concern about bias in word embeddings is that these vectors are commonly used in applications, and this might inadvertently amplify unwanted stereotypes. The example in the paper mentions web search aided by word embeddings. The scenario described is of an employer looking for an intern in computer science by searching for terms related to computer science, and the authors suggest that a LinkedIn page of a male researcher might be ranked higher in the results than that of a female researcher, since computer science terms are closer in the vector space to male names than to female names (because of the current bias). In this scenario, and in many other possible scenarios, the word embeddings are not just passively recording the gender bias, but might actively contribute to it!

        Hal Daumé III wrote a blog post called Language Bias and Black Sheep about the topic, and suggested that the problem goes even deeper, since corpus co-occurrences don't always capture real-world co-occurrences, but rather statistics of things that are talked about more often:

        "Which leads us to the "black sheep problem." We like to think that language is a reflection of underlying truth, and so if a word embedding (or whatever) is extracted from language, then it reflects some underlying truth about the world. The problem is that even in the simplest cases, this is super false."

Prior to reading this paper (and the discussion and blog posts that followed it), I never realized that we are more than just passive observers of data; the work we do can actually help mitigate biases or inadvertently contribute to them. I think we should all keep this in mind and try to see whether our next work can have any positive or negative effect on that matter -- just like we try to avoid overfitting, cherry-picking, and annoying reviewer 2.


        References:
        [1] Cross-national patterns of gender differences in mathematics: A meta-analysis. Else-Quest, Nicole M.; Hyde, Janet Shibley; Linn, Marcia C. Psychological Bulletin, Vol 136(1), Jan 2010, 103-127.
        [2] Sex Differences in Intrinsic Aptitude for Mathematics and Science?: A Critical Review. Spelke, Elizabeth S. American Psychologist, Vol 60(9), Dec 2005, 950-958.
        [3] Women's underrepresentation in science: Sociocultural and biological considerations. Ceci, Stephen J.; Williams, Wendy M.; Barnett, Susan M. Psychological Bulletin, Vol 135(2), Mar 2009, 218-261. 
[4] Why don't they want a male-dominated job? An investigation of young women who changed their occupational aspirations. Pamela M. Frome, Corinne J. Alfeld, Jacquelynne S. Eccles, and Bonnie L. Barber. Educational Research and Evaluation, Vol. 12, Iss. 4, 2006.
        [5] Women, Gender and Work: What Is Equality and How Do We Get There? Loutfi, Martha Fetherolf. International Labour Office, 1828 L. Street, NW, Washington, DC 20036, 2001.
        [6] Quantifying and Reducing Stereotypes in Word Embeddings. Tolga Bolukbasi, Kai-Wei Chang, James Zou, Venkatesh Saligrama, Adam Kalai. 2016 ICML Workshop on #Data4Good: Machine Learning in Social Good Applications.


        *STEM = science, technology, engineering and mathematics