Safety cases have not been considered particularly important in knowledge-based systems work, but the engineering of these systems is now sufficiently advanced that safety is becoming an issue. There are two extremes: the view taken in some industrial safety-critical software standards that "AI methods" should be excluded en masse, versus the laissez-faire attitude evident in some vendor information. We discuss whether a more balanced view of safety cases might emerge and speculate on whether it is likely to differ from that for conventional software.