How to disable or reduce thinking

#13
by AyoubChLin - opened

Hi everyone,

I'm using AutoProcessor and AutoModelForImageTextToText, and the model often outputs reasoning/thinking text.

I'm trying to make the model respond with only the final answer or at least reduce the amount of reasoning it outputs.

Current setup:

from transformers import AutoProcessor, AutoModelForImageTextToText

processor = AutoProcessor.from_pretrained("Qwen/Qwen3.5-9B")

model = AutoModelForImageTextToText.from_pretrained(
    "Qwen/Qwen3.5-9B",
    torch_dtype="auto",
    device_map="auto",
    trust_remote_code=True
)

Questions:

  • Is there an official way to disable the thinking / reasoning output in Qwen3.5?
  • Is enable_thinking=False the recommended approach for this model?
  • Are there any other generation settings recommended to reduce reasoning and return concise answers?

Thanks in advance for the help!

You don't have to decide at deployment time whether thinking is disabled; you can control it per request from the client. For example, when calling the server with curl, add this parameter to the request body:

"chat_template_kwargs": {"enable_thinking": false}

Full example:

curl http://localhost:8001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -H "Authorization: xxx" \
  -d '{
    "model": "Qwen3.5-9B",
    "stream": true,
    "chat_template_kwargs": {"enable_thinking": false},
    "messages": [
      {"role": "user", "content": "hello"}
    ]
  }'
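For anyone calling the endpoint from Python rather than curl, the same request body can be built as a plain dict. This is a minimal sketch assuming the same hypothetical local server, port, and token as in the curl example above; the key part is the chat_template_kwargs entry:

```python
import json

# Same request body as the curl example; chat_template_kwargs is the key part.
payload = {
    "model": "Qwen3.5-9B",
    "stream": False,  # set to True for streaming, as in the curl example
    "chat_template_kwargs": {"enable_thinking": False},
    "messages": [
        {"role": "user", "content": "hello"}
    ],
}

body = json.dumps(payload)
print(body)

# To actually send it (assumes the local endpoint from the curl example):
# import requests
# r = requests.post(
#     "http://localhost:8001/v1/chat/completions",
#     headers={"Content-Type": "application/json", "Authorization": "xxx"},
#     data=body,
# )
```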

This is how I use Qwen3.5 without the thinking process: pass enable_thinking=False to tokenizer.apply_chat_template.

from transformers import AutoModelForCausalLM, AutoTokenizer
import torch
import re

model_id = "Qwen/Qwen3.5-4B"
tokenizer = AutoTokenizer.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype=torch.bfloat16,
    trust_remote_code=True
)
messages = [
    {"role": "user", "content": "Say five countries in Africa."}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    enable_thinking=False,  # disable the thinking process
    add_generation_prompt=True
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=512,
    do_sample=False  # greedy decoding; temperature=0 is rejected by generate()
)
response = tokenizer.batch_decode(generated_ids, skip_special_tokens=True)[0]
# Keep only the assistant turn, then strip any residual <think>...</think> block.
raw_answer = response.split("assistant\n")[-1]
clean_answer = re.sub(r'<think>.*?</think>', '', raw_answer, flags=re.DOTALL).strip()
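The final re.sub in the snippet above is worth keeping even with enable_thinking=False, since some chat templates still emit an empty <think></think> pair. A standalone sketch of that stripping step, using made-up model responses:

```python
import re

def strip_thinking(text: str) -> str:
    """Remove a <think>...</think> block (including multi-line ones) and trim whitespace."""
    return re.sub(r'<think>.*?</think>', '', text, flags=re.DOTALL).strip()

# Hypothetical raw outputs: one with reasoning, one with an empty block.
with_reasoning = "<think>\nThe user wants a greeting.\n</think>\nHello!"
empty_block = "<think></think>\nHello!"

print(strip_thinking(with_reasoning))  # Hello!
print(strip_thinking(empty_block))     # Hello!
```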

The thinking in Qwen3.5 is egregious. It needlessly burns thousands of tokens reasoning in circles with no benefit over Qwen3 non-thinking: a waste of time and a waste of tokens. This is especially true of the 9B model, but the larger, less quantized versions behave the same way.
