Coincidentally, the title reads like a response to yesterday's entry, but an NBER paper of that name has been posted (ungated version). The original title is "I Don't Know," and the authors are Matthew Backus (Columbia University) and Andrew Little (UC Berkeley).

Experts with reputational concerns, even good ones, are averse to admitting what they don’t know. This diminishes our trust in experts and, in turn, the role of science in society. We model the strategic communication of uncertainty, allowing for the salient reality that some questions are ill-posed or unanswerable. Combined with a new use of Markov sequential equilibrium, our model sheds new light on old results about the challenge of getting experts to admit uncertainty – even when it is possible to check predictive success. Moreover, we identify a novel solution: checking features of the problem itself that only good experts will infer – in particular, whether the problem is answerable – allows for equilibria where uninformed experts do say “I Don’t Know.”


Can experts who care about perceptions of their competence be induced to admit uncertainty? The prior literature is bleak. We contribute by introducing a cheap talk model with an explicit focus on problem difficulty and showing that it creates room for positive results. Consistent with prior work, fact-checking experts – with the attendant threat of reputational consequences when wrong – is never enough to induce honesty. In the language of our model, checking whether experts are correct is not enough to get them to say “I don’t know." However, new to our setting, we show that if the decision-maker can learn ex-post whether the question was well-formulated, then the threat of catching the expert answering an unanswerable question can make experts willing to admit uncertainty.


The literature that precedes us has shown the following to be robust: that when good experts receive imperfect signals and all experts have reputational concerns, it is difficult to induce honest strategic communication. We offer a simple intuition for this finding: predictive accuracy distinguishes the informed from the uninformed, not necessarily the good from the bad. If the decision-maker can learn about the problem itself they can generate informational asymmetries between good uninformed experts and bad ones, and more effectively incentivize honesty. Here we have focused on problem difficulty because we believe that decision-makers often ask unanswerable questions, but we also believe that this simple intuition may take other forms, and that further development of this idea is a fruitful area for future work.
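The intuition in this closing passage can be illustrated numerically. The following is a toy Monte Carlo sketch of my own, not the authors' model: the parameters `PI`, `Q`, and `RHO` and the behavioral rule are made-up assumptions. Good experts learn whether a question is answerable and, if it is, observe the true answer with probability `RHO`; bad experts learn nothing and always guess. The empirical posteriors show that outcome-checking separates the informed from the uninformed (correct answers are better news than wrong ones, but wrong answers come from good uninformed experts too), while answerability-checking cleanly exposes bad experts: under this behavior, only bad experts ever answer an unanswerable question.

```python
import random
from collections import defaultdict

random.seed(1)

PI = 0.5    # prior prob. the expert is good      (hypothetical number)
Q = 0.7     # prob. the question is answerable    (hypothetical number)
RHO = 0.8   # prob. a good expert observes the true answer when one exists

N = 200_000
tally = defaultdict(lambda: [0, 0])  # observation -> [good count, total count]

for _ in range(N):
    good = random.random() < PI
    answerable = random.random() < Q
    truth = random.random() < 0.5  # binary answer
    informed = good and answerable and random.random() < RHO

    # Behavioral rule (assumed, not derived): informed experts report the
    # truth; good experts who see the question is unanswerable say "IDK";
    # everyone else guesses.
    if informed:
        obs = "correct"
    elif good and not answerable:
        obs = "idk"
    else:
        guess = random.random() < 0.5
        if not answerable:
            obs = "answered_unanswerable"  # caught if answerability is checked
        else:
            obs = "correct" if guess == truth else "wrong"

    tally[obs][0] += int(good)
    tally[obs][1] += 1

# Empirical posterior P(good | observation)
post = {obs: g / n for obs, (g, n) in tally.items()}
print({obs: round(p, 3) for obs, p in post.items()})
```

With these parameters, a wrong answer lowers the expert's reputation but does not destroy it (good uninformed guessers are in that pool too), whereas answering an unanswerable question pins the posterior at zero, which is what makes "I don't know" sustainable when answerability can be checked ex post.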