A previous PR had a comment addressed, but I forgot to run rubocop until
after it had already merged. Oops.
Anyway, minor change to make rubocop happy again (but still ignoring
config.ru because it's a nightmare in there)
In the previous PR we moved some of the logic out of the electrum
webhook handler, but it was still hitting electrum to turn addresses
into transaction hashes. We wanted to do even less in the electrum
webhook path, so this just dumps the raw info into a queue and then
returns. It doesn't even check if the customerID and address match,
that's handled offline now and in bulk.
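A minimal sketch of that dump-and-return shape. The names here are hypothetical and a plain Array stands in for the Redis list the real handler would push onto:

```ruby
require "json"

# Hypothetical sketch of the new webhook path: serialize the raw payload,
# push it onto a queue, and return immediately. No validation happens here;
# customerID/address matching is done offline, in bulk. A plain Array
# stands in for the Redis list the real handler would RPUSH into.
RAW_QUEUE = []

def handle_electrum_webhook(params)
  RAW_QUEUE.push(JSON.generate(params))
  [200, {}, ["ok"]] # return right away; everything else happens later
end
```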
Then, we have this new hydrate script which reads from that dumb queue,
hits the electrum, and then pushes to the slightly-smarter "queue-like"
hash for the process_pending_btc_transactions script to handle.
There's a fundamental difference between the two that I probably
wouldn't have built if I'd started from scratch, but I don't hate it now
that we're here. This hydrate script is meant to be run as a daemon,
blocking in redis and maintaining a PID file for monit to monitor,
whereas the process_pending script is run by cron and is meant to handle
a bulk of orders all together.
So this one can just chew through every address as it comes in, or if
we're doing a reassert it can truck along on its merry way working
through a huge backlog, and then the process_pending script will allow a batch
to accumulate to process all at once. And I think that's okay.
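The hydrate loop described above could be sketched roughly like this. All names are illustrative, and an injected lambda plays the part of electrum; a real daemon would block on Redis (e.g. BRPOP) rather than looping over an Array:

```ruby
require "json"

# Hypothetical sketch of the hydrate daemon's core loop: pop raw webhook
# payloads one at a time, ask "electrum" (here an injected lambda) for the
# transaction hashes, and write them into the queue-like hash that the
# process_pending script consumes. One address hits electrum at a time,
# by design.
class HydrateWorker
  def initialize(source:, resolver:, dest:)
    @source, @resolver, @dest = source, resolver, dest
  end

  def drain
    while (raw = @source.shift)
      payload = JSON.parse(raw)
      address = payload.fetch("address")
      @dest[address] = @resolver.call(address)
    end
  end
end
```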
I may have overdesigned the pidfile stuff, but meh, it was fun.
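For flavour, a pidfile guard for a monit-monitored daemon usually looks something like this sketch (function name and details are my own, not necessarily what the script does):

```ruby
# Hypothetical pidfile guard: refuse to start if another live instance
# holds the file, reclaim a stale one, and record our own PID for monit.
def acquire_pidfile(path)
  if File.exist?(path)
    old_pid = File.read(path).to_i
    begin
      Process.kill(0, old_pid)  # signal 0: liveness check only, no signal sent
      raise "already running as pid #{old_pid}"
    rescue Errno::ESRCH
      File.delete(path)         # stale pidfile, reclaim it
    end
  end
  File.write(path, Process.pid.to_s)
  at_exit { File.delete(path) if File.exist?(path) }
end
```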
One of the main things this does versus the current model is to just
linearize what would otherwise be a parallel process. Previously we'd
have perhaps multiple webhooks coming in together during a reassert say,
and they'd all be hitting electrum and blocking electrum. Whereas this
basically makes it so only one address can be hitting electrum at a
time. And that's on purpose to kinda hope that this is a little more
gentle on the poor thing.
But if we did want to parallelize later, the easiest way would just be
to run this 3 times with 3 different lockfiles, and that would ensure
there were always 3 running and work with the existing code as-is.
The previous code would successfully _skip_ the filtered items, as in it
wouldn't run the body of the map, but it would also just leave them
there, meaning they'd be there next time.
So it would successfully skip stuff, but that's not really what we
wanted...
This both skips and clears.
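A toy model of the difference, with Arrays standing in for the queue:

```ruby
# Old shape: skip filtered items inside the iteration, but leave
# everything in the queue, so skipped items come back next run.
def old_process(queue, wanted)
  handled = []
  queue.each { |item| handled << item if wanted.include?(item) }
  handled # queue untouched afterwards
end

# New shape: pop every item off as we go, wanted or not, so
# filtered items are skipped AND cleared.
def new_process(queue, wanted)
  handled = []
  while (item = queue.shift)
    handled << item if wanted.include?(item)
  end
  handled # queue is empty afterwards
end
```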
Commit 0f743b2 by Christopher Vollick: Mark Outgoing Bitcoin Transactions as Ignored
We have a bunch of transactions that get added to the queue every so
often that we know we're never going to care about, but every time we
re-assert to electrum we have to go back and hammer it about them
anyway over and over again.
So with this, we take the ones we know we're never going to care about
and cache that fact in redis, so electrum can stop hearing about them.
And it hopefully can read those answers somewhat quickly per-chunk so
the entire bin can clear itself out more easily, all without having to
bug our special baby!
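The ignore-cache shape might look like this sketch, with a Ruby Set standing in for the Redis set the commit describes (names are mine):

```ruby
require "set"

# Hypothetical ignore cache: txids we know we'll never care about go
# into a set, and each chunk is filtered against it before anything
# touches electrum.
IGNORED = Set.new

def mark_ignored(txid)
  IGNORED.add(txid)
end

def filter_chunk(txids)
  txids.reject { |txid| IGNORED.include?(txid) }
end
```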
Commit 1944a99 by Christopher Vollick: Enqueue All Electrum Transactions, Filter and Chunk in Process Pending
This does two things:
The first is that it moves validation of the electrum transactions out
of the webhook and into the processing script.
The theory here is that we want the actual webhook to be lighter so
things like reasserts aren't as hard on the system, etc. So we could
have dumped into a queue to process later by doing these checks and then
pushing into the pending list when it looks legit.
But really the pending list _is_ a queue already. So rather than having
another queue and another background process to process it, let's just
have the bin we're already running do the checks before bothering
electrum! Then the electrum webhook is basically just dumping into the
pending list. This means that, on a reassert, every transaction we've
ever seen will end up in this queue, but if we had a separate background
queue that would effectively still be true; it would just be a different
pre-queue queue.
There's just one problem with this scheme, which is that the current
speed of the bin is not enough to actually make it through this much
data in any kind of reasonable time. So! I've also introduced batching
to this new process that runs the validity filters on an entire chunk at
a time, with the hope that we can chew through the vast majority of them
with only a few queries to the DB and Redis before bothering Electrum,
which is the slower and more fragile of them.
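The batching shape could be sketched like this; the cheap bulk check and the slow electrum step are stubbed as lambdas, and all names are illustrative:

```ruby
# Hypothetical sketch of chunked processing: run the cheap validity
# checks (DB/Redis lookups, stubbed here as `valid`) over a whole slice
# at once, and only hand the survivors to the slow, fragile electrum
# step (stubbed as `electrum`).
def process_pending(txids, chunk_size:, valid:, electrum:)
  results = []
  txids.each_slice(chunk_size) do |chunk|
    survivors = valid.call(chunk)               # one bulk check per chunk
    survivors.each { |txid| results << electrum.call(txid) }
  end
  results
end
```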
This could have been two patches, but I decided to include them together
so the old logic and the new logic are both next to each other in the same
place for easier context. Otherwise there would have been a patch that
added a bunch of filters and stuff that would seem like they came out of
nowhere at best, and like they were doing unnecessary work because we
were already filtering things at worst. Then the next commit would just
be deleting that stuff, making the last one necessary.
This way you can see where the logic _used_ to live, and compare it to
the equivalent logic in the new place.
Oops, I deleted the `done` variable in the migration and forgot to put
it back, and then I didn't notice that I never passed through the
`confirmations` method to the electrum transaction, because it's used in
the bin logic, but not in any of the tested logic.
And since I'm now delegating two methods, I may as well use a real
delegator. I thought about it with the first one, but it seemed like six
of one, half a dozen of the other, but with two it starts swinging in the
direction of Forwardable for me.
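The Forwardable version looks roughly like this; the class and method names (`confirmations`, `txid`) are illustrative stand-ins:

```ruby
require "forwardable"

# Sketch of the switch to Forwardable: once two methods are being passed
# through, def_delegators reads better than two hand-written wrappers.
class ElectrumTransaction
  extend Forwardable
  def_delegators :@raw, :confirmations, :txid

  def initialize(raw)
    @raw = raw
  end
end
```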
One was just a typo that obviously should be fixed here, and the other
is a small policy change.
Compared to upstream hotfix, spaces and parens were added to keep
rubocop happy.
We had a few lines that were simply too long, so I broke them up.
Two of them I did by factoring out a parameter into a variable, which
felt a little weird, but I liked the overall flow of the line otherwise
and I didn't want to have the whole block need to change just because
the pattern was a few characters too long...
I think it's clear enough.
The `bin/process_pending_btc_transactions` script was getting a bit
heavy, so I pulled out a piece of it into its own class with tests.
This, along with one or two lines that were split for length, means
there are now no rubocop violations in bin/process_pending_btc_transactions.
It's still a bit heavy and has a few internal classes that are untested,
but it's still progress!
This started because Rubocop told me it didn't like OpenStruct.
Okay, whatever.
But then while writing a mock, I realized that the OpenStruct had a key
of "name", but the code looked like it was looking for a value called
"plan_name".
Weird.
With a bit more tracing it looks like the only thing this DB is used for
is `customer_plan`, and _that_ only gets called in the constructor,
which isn't being created in this test; it's being created in the setup.
And also it doesn't use the customer plan when a currency gets passed
in, since that's all it's using it for, and our tests are passing a
currency in.
So this mock was the wrong format, but that didn't matter because the
code that would have used it wasn't being run, but if it was run it'd be
run before this mock was constructed.
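The rubocop-friendly replacement is a Struct with an explicit member list; unlike an OpenStruct, it raises on a member it doesn't define, which is exactly how a name/plan_name mismatch like the one above gets caught. The names here are made up for illustration:

```ruby
# Hypothetical mock: a Struct instead of an OpenStruct. Accessing an
# undefined member raises NoMethodError instead of silently returning nil.
PlanRow = Struct.new(:plan_name, keyword_init: true)

row = PlanRow.new(plan_name: "basic")
row.plan_name # => "basic"
# row.name would raise NoMethodError, exposing the wrong-key mock
```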
Rubocop tells me I don't need "::"
Obviously... but I wrote it...
I wonder if I used to have something else called Exception that this was
distinguishing from at one point...
Just doing what rubocop tells me.
For one, since I'm there, I used `then` to not fetch twice.
I could have translated more directly, but I figured I'd do this while
I'm here.
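The `then` shape, with a Hash standing in for the store being fetched from:

```ruby
# Fetch once and operate on the value inside the block, instead of
# fetching twice; `&.then` also short-circuits cleanly on a missing key.
store = { "count" => "41" }

value = store["count"]&.then { |v| v.to_i + 1 }
value   # => 42

missing = store["nope"]&.then { |v| v.to_i + 1 }
missing # => nil
```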
Two fixes here. The first is a variable-shadowing error, because I used
`h` both in the map and in the `each_with_object`.
I now use `s` on the outside.
Second is that while `[x].flatten.compact` seemed to do what I wanted,
it seems like `Array(x)` does _nearly_ the same thing.
One important difference between the two is how `Array` works if `x` is
a hash, but in my case it's not, so for my purposes they're the same.
Cleaner!
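A quick illustration of the near-equivalence, including the hash case where the two differ:

```ruby
# For nil, arrays, and scalars the two forms agree:
[nil].flatten.compact      # => []
Array(nil)                 # => []
[[1, 2]].flatten.compact   # => [1, 2]
Array([1, 2])              # => [1, 2]
["a"].flatten.compact      # => ["a"]
Array("a")                 # => ["a"]

# The hash case is where they differ: Array() calls to_a on a Hash.
[{ a: 1 }].flatten.compact # => [{:a=>1}]
Array({ a: 1 })            # => [[:a, 1]]
```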
Apparently this is a thing?
Most of the emails I got don't have this, but one that I tested just
happened to, so I have to handle it!
We make sure there's always an array now, and then parse "each" of them,
and fail if none of them are for the interac email address.
I don't really care about the others, but I do worry at some point
they're going to stop supporting the address I like...
Either way, this works today.
They've changed the way their emails look, so I'm changing my scraper in
response!
New format is more blocky, so things are further out. The position of
the message has changed and is different between Manual and Automatic,
so I'm scanning paragraphs looking for "Message:" now.
Also, their newline formats have changed but are also inconsistent, so
some mailings have \r\n and others just \n...
This is _actually_ the part that broke emails so they all say they "have
the wrong paragraph structure". I mean, the old ones weren't going to
work anymore anyway, but they broke so fundamentally that it wasn't even
like "Can't find money" or something because everything looked like one
paragraph.
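The newline-tolerant paragraph scan could be sketched like this; the email body is made up:

```ruby
# Split paragraphs with a regex that accepts both \r\n and \n, then scan
# for the paragraph starting with "Message:" regardless of its position.
body = "Header stuff\r\n\r\nMessage: rent for May\r\n\r\nFooter"

paragraphs = body.split(/\r?\n\r?\n/)
message = paragraphs.find { |p| p.start_with?("Message:") }
message # => "Message: rent for May"
```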
So the wording has changed, but I'm still trying to be over-specified
because I don't want a carefully worded "Message" to trick the system.
Anyway, this'll probably be fine until later it's not...
I've chosen 003 for the adapter and 002 for the card reader because, in
order of our acquisition, we had the esim.me cards first, then the card
readers, and then the esim adapters.