Passing a large number of events to `Journaled::Writer#enqueue!` (as happens when using `with_transactional_batching`) can result in a "stack level too deep" error being raised. The root cause seems to be related to how the Ruby VM allocates initializer arguments when creating a new object: specifically, splatted arguments appear to be allocated on the stack instead of the heap, leading to the rather confusing error message about the stack level (instead of the stack size) being exceeded.
This line in `ActiveJob::ConfiguredJob` appears to be the culprit: https://github.com/rails/rails/blob/7f309f0d481b8b6cd74e36eb16c2c031ba57bd76/activejob/lib/active_job/configured_job.rb#L15
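For context, the linked method looks roughly like this (paraphrased from that Rails commit; the splat into `new` is the key detail):

```ruby
# ActiveJob::ConfiguredJob (paraphrased):
def perform_later(*args)
  @job_class.new(*args).enqueue @options # *args is splatted into #new
end
```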
Here's a fork with a failing test case: jacamera@0eafa68
This is the most minimal reproduction I could come up with:
```ruby
a = 100_000.times.to_a

class Foo
  def initialize(*args)
    puts "initialized Foo with #{args.count} arg(s)"
  end

  def bar(*args)
    puts "called #bar with #{args.count} arg(s)"
  end
end

Foo.new.bar(*a)
# initialized Foo with 0 arg(s)
# called #bar with 100000 arg(s)

Foo.new(*a).bar
# stack_splat.rb:15:in `new': stack level too deep (SystemStackError)
#         from stack_splat.rb:15:in `<main>'
```

I didn't have much luck tracking down any documentation on this behavior in Ruby, and I'm not sure how to work around it here without changing the call signature of `Journaled::DeliveryJob` to accept an array of events rather than individual arguments. I guess we could add a new `Journaled::BulkDeliveryJob` or something if we wanted to maintain the API of the current job.
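For illustration, here's a minimal sketch of that idea (hypothetical name and placeholder delivery logic, not the gem's actual implementation). Because the job receives the whole batch as a single array argument, only one value ever crosses the `new` call boundary, so the batch size no longer matters:

```ruby
# Hypothetical sketch only: a bulk job that accepts the batch as one array
# argument instead of splatting each event onto the call stack.
module Journaled
  class BulkDeliveryJob < ActiveJob::Base
    def perform(events)
      # Placeholder delivery logic; the real job would presumably share the
      # per-event delivery code with the existing DeliveryJob.
      events.each { |event| Rails.logger.info("delivering #{event.inspect}") }
    end
  end
end

# Writer#enqueue! could then enqueue one argument regardless of batch size:
# Journaled::BulkDeliveryJob.perform_later(events)
```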