Add documentation showing how to use optillm's local inference server to obtain logits (logprobs). This is a commonly requested feature in ollama (https://github.com/ollama/ollama/issues/2415) that optillm already supports and that works well.
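As a rough illustration of what the documentation could cover, here is a minimal sketch that requests per-token logprobs through optillm's OpenAI-compatible endpoint. It assumes optillm is running locally on its default port with the built-in local inference server enabled (e.g. by setting `OPTILLM_API_KEY=optillm`); the model name is a placeholder for whichever model you have loaded.

```python
# Minimal sketch: fetch per-token log probabilities from optillm's local
# inference server via the standard OpenAI chat completions API.
# Assumptions: optillm is serving on localhost:8000 with local inference
# enabled (OPTILLM_API_KEY=optillm); substitute your own model name.
from openai import OpenAI

client = OpenAI(
    base_url="http://localhost:8000/v1",  # optillm's default endpoint
    api_key="optillm",                    # selects local inference mode
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.2-1B-Instruct",  # placeholder model
    messages=[{"role": "user", "content": "Hello!"}],
    logprobs=True,      # request per-token log probabilities
    top_logprobs=5,     # plus the 5 most likely alternatives per position
    max_tokens=16,
)

# Each generated token carries its logprob and the top alternatives.
for token_info in response.choices[0].logprobs.content:
    print(token_info.token, token_info.logprob)
    for alt in token_info.top_logprobs:
        print(f"  alt: {alt.token} ({alt.logprob})")
```

Since this uses the standard `logprobs`/`top_logprobs` fields of the OpenAI API, any existing OpenAI client code should work unchanged against the local server.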