@siderea
Don't know about Sydney/Bing, but I read about someone getting ChatGPT to say it doesn't know just by adjusting the prompt. It was something along the lines of "Who is the main character in this story? Respond with 'I don't know' if unsure", which gave the model "permission" to admit it can't tell, so it didn't have to come up with a plausible-but-wrong answer every time.
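
For anyone who wants to try it themselves, here's roughly what that looks like with the OpenAI Python client. This is just a sketch of the idea: the model name, story text, and exact wording are placeholders, not the actual prompt from the post I read.

```python
# Sketch of the "give it permission to say 'I don't know'" prompt pattern.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Placeholder story with no clearly identifiable main character.
story = "It was a dark and stormy night. Someone knocked on the door."

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {
            "role": "user",
            "content": (
                f"{story}\n\n"
                "Who is the main character in this story? "
                "Respond with 'I don't know' if unsure."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```

The key bit is the last sentence of the prompt; without it, the model tends to pick someone and answer confidently anyway.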