Decision-making strategies in animals emerge from the interplay between sensory evidence, contextual cues, and outcome history, shaped by both individual experience and structural priors adapted to natural environments. In two-alternative forced-choice (2AFC) tasks, rats exploit sequential trial correlations after correct trials but systematically ignore them after errors, a suboptimal behavior that potentially reflects such priors. To investigate the neural mechanisms linking behavior and network dynamics, we trained recurrent neural networks (RNNs) and low-rank RNNs (lrRNNs) on context-dependent perceptual categorization tasks. Standard RNNs readily integrated across-trial information after both correct and error trials, outperforming rats, whereas pre-training on more ecological tasks with multiple choices induced gating in otherwise optimal networks, mimicking rats’ post-error behavior. In contrast, lrRNNs with constrained connectivity exhibited diverse strategies depending on their rank. Notably, rank-2 models showed behavior similar to the rats’ outcome-dependent gating, whereas higher-rank models adopted more optimal strategies. Analyses of the networks’ low-dimensional latent dynamics revealed that networks could represent block contexts independently of prior outcomes, but that reward history modulated these representations. These results demonstrate that varying RNN connectivity systematically produces a range of identifiable strategies, including those observed in experiments, striking a balance between structural simplicity and behavioral complexity.