
bpo-19217: Speed up assertEqual on long sequences #27434

Open – wants to merge 35 commits into main

Conversation

@jdevries3133 (Contributor) commented Jul 28, 2021

This incorporates changes started by @eamanu.

Additionally, I added two commits to make this ready to merge:

  • revert changes to difflib.py
  • add regression test

https://bugs.python.org/issue19217

@sweeneyde (Member):

I'd recommend changing the PR title to something affirmative, e.g. "Speed up assertEqual on long sequences"

@jdevries3133 changed the title from "bpo-19217: slow assertEq for moderate length list" to "Speed up assertEqual on long sequences" on Jul 29, 2021
@jdevries3133 (Contributor, Author):

> I'd recommend changing the PR title to something affirmative, e.g. "Speed up assertEqual on long sequences"

I will! That's definitely more descriptive, thank you.

@ambv (Contributor) left a comment:

To sum up comments:

  • Please add the new variant to assertMultiLineEqual as well;
  • Please adapt the test cases to be pathological (you can use my suggestion) with the original code for both strings and lists;
  • Please change the test to simply execute the diff; don't measure time at all. If it becomes too slow, we'll know just by the fact that it became super slow. (A sketch of such a test follows below.)
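
A minimal sketch of such a test – the class name, method name, and exact inputs are illustrative assumptions, not the PR's actual test code:

import unittest

class TestPathologicalDiff(unittest.TestCase):
    def test_many_small_changes_in_list(self):
        # ~10 scattered changes in a 1,000-element list; no timing –
        # a regression shows up as the suite becoming noticeably slow.
        a = list(range(1000))
        b = [x + 1 if x % 100 == 0 else x for x in a]
        with self.assertRaises(self.failureException):
            self.assertEqual(a, b)

if __name__ == '__main__':
    unittest.main()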

@bedevere-bot:

A Python core developer has requested some changes be made to your pull request before we can consider merging it. If you could please address their requests along with any other requests in other reviews from core developers that would be appreciated.

Once you have made the requested changes, please leave a comment on this pull request containing the phrase I have made the requested changes; please review again. I will then notify any core developers who have left a review that you're ready for them to take another look at this pull request.

And if you don't make the requested changes, you will be poked with soft cushions!

@ambv changed the title from "Speed up assertEqual on long sequences" to "bpo-19217: Speed up assertEqual on long sequences" on Jul 29, 2021
@jdevries3133 (Contributor, Author) commented Jul 30, 2021

> Please add the new variant to assertMultiLineEqual as well

@ambv before I go any further, I went ahead and made the change from ndiff to unified_diff in a different branch on my fork. Look at the README:

https://github.com/jdevries3133/cpython/tree/bpo-19217-ideas

By the way, there is one other problematic use of ndiff in assertDictEqual:

cpython/Lib/unittest/case.py

Lines 1134 to 1136 in 48a6255

diff = ('\n' + '\n'.join(difflib.ndiff(
               pprint.pformat(d1).splitlines(),
               pprint.pformat(d2).splitlines())))

Reproducing the issue is similar to the others:

import unittest

class TestDictDiffSlow(unittest.TestCase):

    def test_slow_assert_equal_dict(self):
        d1 = {i: i * 2 for i in range(10000)}
        d2 = {i: i for i in range(10000)}
        self.assertDictEqual(d1, d2)


if __name__ == '__main__':
    unittest.main()

I'm happy to fix all the failing tests, and there are quite a few, but the change from ndiff to unified_diff also changes the diff messages a lot. I know you're aware of this, but you can look at that branch to see more examples. Here are a few:

Before:

- Once upon a time,
?            ^^^^
+ Once upon an age,
?            ^^^^
  there was a boy named Tim.
- Here had some tiny knuckles
- and in his pocket, a slim jim.
+ He had some wise old sages
+ who told him how to swim.

After:

AssertionError: 'Once upon a time,\nthere was a boy named Tim.\nHere [50 chars]jim.' != 'Once upon an age,\nthere was a boy named Tim.\nHe ha[44 chars]wim.'
---
+++
@@ -1,4 +1,4 @@
-Once upon a time,
+Once upon an age,
 there was a boy named Tim.
-Here had some tiny knuckles
-and in his pocket, a slim jim.
+He had some wise old sages
+who told him how to swim.

Before:

AssertionError: {'foo': 1, 'bar': 2} != {'foo': 2, 'bar': 2}
- {'bar': 2, 'foo': 1}
?                   ^

+ {'bar': 2, 'foo': 2}
?                   ^

After:

AssertionError: {'foo': 1, 'bar': 2} != {'foo': 2, 'bar': 2}
--- 
+++ 
@@ -1 +1 @@
-{'bar': 2, 'foo': 1}
+{'bar': 2, 'foo': 2}

Other Options

I don't know if these are any good, but maybe there are other options to fix this bug:

Check Size of Input

Check the size of the input before passing it to difflib.ndiff; use difflib.unified_diff only for large inputs.

  • This has the advantage of reducing test breakage
  • It also maintains the nice ergonomics of ndiff in most cases

Make ndiff a Better Generator

Patch ndiff to yield output incrementally (i.e., truly behave like a generator)

  • ndiff currently returns a generator, but it never seems to yield with the problematic inputs.
  • If it could yield output continuously without getting locked up, we could do this:
# inside a TestCase method, so self.maxDiff is available
output_lines = []
for line in difflib.ndiff(a, b):
    if len(output_lines) > self.maxDiff:
        break
    output_lines.append(line)

Łukasz, I don't really understand enough to know if these are really paths forward, or if it's better to just forge ahead with replacing ndiff with unified_diff everywhere. I'm hoping you can provide some guidance!

@ambv (Contributor) commented Aug 2, 2021

Good thinking about having a threshold above which we can switch to unified_diff from ndiff. However, the problem isn't the size of the inputs but the number of changes between them. As you saw, your original example was a list of 10,000 elements with a single change, and that ran sub-second with both algorithms. The example I gave is 1,000 elements and takes 150s because there are 10 changes.

I guess what I'm saying is that it isn't at all clear how our threshold should be calculated. How about this:

  • since unified_diff is always fast, just calculate it;
  • if there are fewer than, say, 25 lines starting with - or + in the resulting diff, then also run ndiff and return that instead.

As part of testing here you should deliberately try to come up with examples where the "25 diff lines" limit is still too big. If you can't find any after a good round of trying to break it, then maybe you can double the limit. And try breaking it then.

I mean, you can spend as much or as little time on this as you wish. But I agree with you that we can tweak this into a solution that is both fast and preserves nice diffs for most use cases.
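
A minimal sketch of the two-pass idea above, assuming list-of-lines inputs – the function name and exact threshold handling are illustrative, not the PR's actual code:

import difflib

def friendly_diff(a_lines, b_lines, max_changed_lines=25):
    # unified_diff is usually cheap, so always compute it first.
    unified = list(difflib.unified_diff(a_lines, b_lines, lineterm=''))
    changed = sum(1 for line in unified
                  if line.startswith(('-', '+'))
                  and not line.startswith(('---', '+++')))  # skip headers
    if changed < max_changed_lines:
        # Few enough changes: the prettier but pricier ndiff is affordable.
        return list(difflib.ndiff(a_lines, b_lines))
    return unified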

* Now, we switch between difflib.ndiff and difflib.unified_diff based on
  input size.
* Threshold is sort of arbitrary, but seems to be working in the limited
  test cases I've written
* The full test suite is passing, with only very minor tweaks to
  existing tests!
@jdevries3133 marked this pull request as draft August 4, 2021 03:34
@jdevries3133 (Contributor, Author) commented Aug 4, 2021

@ambv I converted this to a draft, but I do have an implementation of what we discussed now. You'll notice a few # ambv: ... comments sprinkled throughout. It seems like the idea is working well in general. Test breakage is very minimal, and tests are passing. I'm looking forward to hearing what you think, and thanks!

TODO

This is why I've marked it as a draft:

  • finish writing test_ndiff_to_unified_diff_breaking_point_varied_inputs
  • brainstorm and test more edge cases
  • incorporate feedback on this initial draft

@ambv (Contributor) left a comment:

I like where this is going. Looking forward to the next iteration.

@tim-one (Member) left a comment:

I haven't changed my mind since the last time I commented on one of the related issue reports (which was some years ago): if unittest is determined to compare strings, regardless of the types of the arguments passed to assertEqual(), then it should define the way it constructs such strings, and use its own dirt-simple, worst-case linear-time string diff algorithm. Just, e.g., look for the first mismatching character.

You can try to out-think difflib, but at best you'll make "the problem" less likely. If that's good enough for you, fine by me 😉. The approach here, to my eyes, should work pretty well for most cases.
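
For concreteness, a minimal sketch of the dirt-simple linear-time string diff described above – the helper name and message format are illustrative:

def first_mismatch_message(a, b):
    # Worst-case linear: find the first index where the strings disagree
    # and show a little context around it.
    n = min(len(a), len(b))
    for i in range(n):
        if a[i] != b[i]:
            lo = max(0, i - 20)
            return (f"strings differ at index {i}:\n"
                    f"  a[{lo}:{i + 20}] = {a[lo:i + 20]!r}\n"
                    f"  b[{lo}:{i + 20}] = {b[lo:i + 20]!r}")
    if len(a) != len(b):
        return f"one input is a prefix of the other; lengths {len(a)} != {len(b)}"
    return "strings are equal"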

Note that people here are generally comparing the very best case for unified diff to the very worst case for ndiff: no strings in common. Unified diff does only one layer of differencing, and if no elements are in common it has next to nothing to do. But ndiff does two layers: if there are no strings in common, it goes on to compare every string in one argument to every string in the other, to find a pair "closest to" matching.

But unified diff can be quadratic time too, when there are lots of matching subsequences.

It's convoluted to construct such cases, because the autojunk=True default works against it (but also damages the quality of the diffs). Here's a simple demonstration by forcing autojunk=False, using so-called "Fibonacci strings", which are a worst case for many string-matching algorithms:

from difflib import SequenceMatcher
fs = ['a', 'b']
from time import perf_counter as now
while True:
    a = fs[-2]
    b = fs[-1]
    start = now()
    r = SequenceMatcher(None, a, b, autojunk=False).ratio()
    finish = now()
    print(len(b), finish - start)
    fs.append(a + b)

The much-worse-than-linear behavior then quickly becomes obvious at non-trivial, but still smallish, string lengths:

...
144 0.0011279999998805579
233 0.0033622999999352032
377 0.008270400000583322
610 0.027310899999974936
987 0.05379209999955492
1597 0.1655982000002041
2584 0.4110897999998997
4181 1.1430293000003076
6765 2.670060499999636
10946 7.549530700000105
17711 19.679138700000294
28657 51.00185919999967
46368 137.25895500000024
75025 383.6125170999994
...

That was done on a far-from-quiet machine, but the point should be obvious enough.

@tim-one (Member) commented Feb 8, 2022

Actually, using autojunk=False, it's dead easy to create a clearly quadratic-time case: two identical inputs, all consisting of a single repeated element. unified_diff() is just a thin wrapper around this basic use of SequenceMatcher:

from difflib import SequenceMatcher
from time import perf_counter as now
xs = 'x'
while True:
    start = now()
    sm = SequenceMatcher(None, xs, xs, autojunk=False)
    r = sm.ratio()
    finish = now()
    print(len(xs), finish - start)
    xs *= 2

@gpshead (Member) commented Feb 8, 2022

> You can try to out-think difflib, but at best you'll make "the problem" less likely. If that's good enough for you, fine by me 😉.

We've been calling difflib from assertEqual since probably ~2009. This issue and PR are just trying to improve upon corner cases where things go wrong for some people. Most people don't run into this, or at least frustratingly work around it when they first do. This should ideally just reduce the need for that going forward.

An ultimate failsafe, if the diff computation is taking too long, would be to bail out of the computation after "a short while" and revert to a simple linear "first difference is here" message. Doing that requires https://bugs.python.org/issue24904, unless diffing were sent to another process (bad idea) that could be killed.

Unified diff is not "fast", just "less likely to be slow" or "slow in different circumstances". So this doesn't ensure the problem never happens; it just reduces its chance.
@gpshead previously approved these changes Feb 8, 2022
@gpshead (Member) left a comment:

I tweaked the wording in the NEWS entry a little.

@gpshead added the "performance" and "type-bug" labels Feb 8, 2022
@tim-one (Member) commented Feb 8, 2022, on this claim in the PR's docstring:

> approximately O((diff)^2), where `diff` is the product of the number of
> differing lines, and the total length of differing lines. On the other
> hand, unified_diff's cost is the same as the cost of producing `diff`
> by itself: O(a + b).

Note that this isn't true. Both have worst-case quadratic time, but on different kinds of input, and it's less likely to get provoked in the simpler kind of differencing unified_diff tries.

Well, actually actually 😉, the possible worst case of ndiff is worse than that, at least cubic time.

@tim-one (Member) commented Feb 8, 2022

And another, to show that ndiff can easily be provoked into cubic time. On my box, it takes over 2 minutes to complete the case of 2 inputs with 512 lines each, where each line has under a dozen characters. But this is a best case for unified_diff, since no two lines are equal.

from difflib import ndiff
from time import perf_counter as now
import sys
sys.setrecursionlimit(2000) # so the 512 case completes
count = 16
while True:
    ax = [f"a{i:010d}" for i in range(count)]
    bx = [s.replace("a", "b") for s in ax]
    start = now()
    ignore = list(ndiff(ax, bx))
    finish = now()
    print(count, finish - start)
    count *= 2

@gpshead (Member) left a comment:

It sounds like we're basically trading one regressive performance situation for another.

Someone can always wind up unhappy. Without the ability to bail out if computation is taking too long, we should take Tim's suggestion of going to a linear "first differing element" highlight approach when inputs are "too large", followed by our generic a != b printing code.

I blindly suggest: max(len(a), len(b)) > 500 prevents calling unified_diff, and max(len(a), len(b)) > 200 prevents calling ndiff. Just based on the worst-case timings.

Those default values used for tuning could be TestCase attributes that people could override themselves if they want (assumed to be rare), much like maxDiff is today. (A sketch follows below.)
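
A sketch of what those overridable knobs might look like – the attribute names, defaults, and helper method are illustrative assumptions, patterned after maxDiff:

import difflib
import unittest

class DiffLimitedTestCase(unittest.TestCase):
    maxNdiffLen = 200         # above this, don't attempt ndiff
    maxUnifiedDiffLen = 500   # above this, don't attempt unified_diff either

    def _sequence_diff(self, a_lines, b_lines):
        n = max(len(a_lines), len(b_lines))
        if n > self.maxUnifiedDiffLen:
            # Linear-time fallback: report only the first difference.
            for i, (x, y) in enumerate(zip(a_lines, b_lines)):
                if x != y:
                    return [f'first differing element {i}: {x!r} != {y!r}']
            return [f'lengths differ: {len(a_lines)} != {len(b_lines)}']
        if n > self.maxNdiffLen:
            return list(difflib.unified_diff(a_lines, b_lines, lineterm=''))
        return list(difflib.ndiff(a_lines, b_lines))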

@gpshead dismissed their stale review February 8, 2022 08:27 – "more to do"

@tim-one (Member) commented Feb 8, 2022

You can get sharper estimates by cheating a bit: build your own SequenceMatcher object:

sm = difflib.SequenceMatcher(None, a, b)

That much is worst-case linear time.

Then

b2j = sm.b2j
sum(len(b2j[line]) for line in a if line in b2j)

is the total number of times a unified diff will go around its innermost loop. It's cheating because b2j is an unadvertised implementation detail. In the dead-simple quadratic-time unified diff example I posted, that will return len(a)**2. Decide accordingly. In the cubic-time ndiff example I posted, it will return 0.

If that's "cheap enough", you can go on to take a guess at worst-case ndiff time (which is never cheaper than unified diff time). Go through sm.get_opcodes(), and look at each "replace" opcode. All and only those block pairs are attacked by ndiff (beyond what a unified diff does). The 5-tuple opcode is of the form:

("replace", i1, i2, j1, j2)

Then

len1 = i2 - i1
len2 = j2 - j1
ouch = len1 * len2 * min(len1, len2)

sets ouch to an upper bound on the number of times the "fancy replace" function will go around to process that block pair (including recursive calls). Sum that over all "replace" opcodes.

If that's "cheap enough", ndiff can be called without worry.
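
Pulling the two estimates together, a minimal sketch – the function name and return shape are illustrative:

import difflib

def diff_cost_estimates(a, b):
    # Building the matcher is worst-case linear time.
    sm = difflib.SequenceMatcher(None, a, b)
    # Estimate 1: total trips around unified diff's innermost loop.
    b2j = sm.b2j
    unified_cost = sum(len(b2j[line]) for line in a if line in b2j)
    # Estimate 2: upper bound on ndiff's extra "fancy replace" work,
    # summed over all "replace" opcodes (per-line diffs ignored).
    ndiff_cost = 0
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "replace":
            len1, len2 = i2 - i1, j2 - j1
            ndiff_cost += len1 * len2 * min(len1, len2)
    return unified_cost, ndiff_cost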

@tim-one (Member) commented Feb 8, 2022

I should add that my sketch of ndiff's upper bound ignores the len1 * len2 distinct diffs done, one for each pair of lines. Those are "1-level" unified-like diffs, so the upper bound on what those take could be computed via the other method. But computing len1 * len2 of those too threatens to make the worst case of upper-bound-finding challenge the worst case of actually doing the diff 😉.

As a practical matter, though, under the current implementation I think that can be ignored. autojunk defaults to True, and the fancy diff implementation doesn't override it. If there are many duplicate letters across both lines (where unified diff's worst cases come from), autojunk acts to ignore them entirely.

@jdevries3133 (Contributor, Author):

Wow, thank you Tim and Gregory – it's a joy to learn from your discussion on this issue; I'm having fun!!

So, here's basically what we have on the table:

To graph it out:

--- Option #1 ---
( current implementation )

                                 -- affordable --> ndiff
                                /
risky unified_diff cost calc ---
                                \
                                 -- too expensive --> unified_diff



--- Option #2 ---
( Tim's suggestion )
 
                          -- very pricy --> first_differing_element
                         /
O(n) custom cost calc --- --- somewhat pricy --> unified_diff
                         \
                          -- downright bargain --> ndiff

So, first things first, @tim-one: do you have any interest in Option #2 becoming part of the difflib API? If so, there's nothing more to do here – the path forward would be to add that feature to difflib and consume it from unittest.

Otherwise, this is the path I'd take with this PR, should continuing appear to be the best option:

  • fix the inaccurate docstring Tim pointed out
  • add test cases for unified_diff's worst-case scenarios, and assert that the autojunk heuristic continues to be used, since the suggested new heuristic depends on it

Note that I'm leaning toward not implementing Tim's O(n) cost calculation and "first differing element" style diff for expensive inputs. I wouldn't rule it out; I just feel like this heuristic in combination with the autojunk heuristic doesn't leave edge cases behind, but that assumption might turn out to be incorrect – it'd need to be tested more thoroughly.

Tim and Gregory, what do you think?

@tim-one (Member) commented Feb 9, 2022

No, I don't see folding any of this into difflib. This issue report has been open for nearly a decade for a reason 😉. That is, it's muddy, and there is no evident "solution".

Far as I'm concerned, the user wrote a low-quality test if the things it's asserting are equal are so large that it's not dead obvious what to show them if the assertion fails. I'm fine with dumping a million lines of text with no indication of where a difference may be - it's what they asked for, after all 😉.

As a thoroughly practical matter, why not just use unified diff, period? It's hard to provoke into pathological behavior, provided autojunk is allowed to turn itself on (which it will do, in truly otherwise-pathological cases).

It also seems bizarre on the face of it to expect a user to make sense of multiple possible diff formats, produced depending on things they have no understanding of.

ndiff was designed to produce high-quality diffs of source code files edited by humans, aimed at human consumption by many eyeballs. Cost was not a concern. A one-shot failing test doesn't merit that expense, or that consideration.

BTW, if you want to be more extreme, I was wrong: the b2j member of a SequenceMatcher object is a documented feature. So it's not "cheating" to use it.

@jdevries3133 (Contributor, Author):

> Far as I'm concerned, the user wrote a low-quality test if the things it's asserting are equal are so large that it's not dead obvious what to show them if the assertion fails. I'm fine with dumping a million lines of text with no indication of where a difference may be - it's what they asked for, after all 😉.

The only issue is (if I understand correctly) that currently, millions of lines of text are not dumped as they're generated. Rather, the program hangs – because the big diff is not streamed to stdout as it's being generated; it's just being loaded into memory. Is there a way to just blast the diff into stdout as it's being generated instead of holding it in memory? This would be a better user experience imo, because at least the user can see what is happening.

To your point, users are obviously being a bit abusive of the testing library, but users also reasonably assume that assertEqual will do something other than just lock up – that it will finish in roughly linear time. Maybe we can dispatch diff generation to another thread, and quit generating the diff after a timeout? I agree that it's more of an issue if the test suite is gross and convoluted, but there are a lot of gross and convoluted tests out there. I see a lot of generators in difflib, but I'm not sure if something like this is possible.

I'm sorry to backpedal to entirely new solutions, and I know this isn't really the place to discuss it, but your criticism is 100% valid and I'd like to forge a path forward. Moreover, I want to slay the 9-year-old bug :(

> As a thoroughly practical matter, why not just use unified diff, period? It's hard to provoke into pathological behavior, provided autojunk is allowed to turn itself on (which it will do, in truly otherwise-pathological cases).

I have tried this; it causes a lot of breakage in unittest's own test suite, which just makes it annoying, but it could be done. Hopefully asserting against failed test output is not a thing outside of unittest's own test suite…

@gpshead (Member) commented Feb 9, 2022

> The only issue is (if I understand correctly) that currently, millions of lines of text are not dumped as they're generated. Rather, the program hangs – because the big diff is not streamed to stdout as it's being generated; it's just being loaded into memory. Is there a way to just blast the diff into stdout as it's being generated instead of holding it in memory?

Non-linear diff algorithms don't work that way. It isn't a sequential algorithm, it's trying to find a best/minimalish fit of differences. Otherwise the diff between 1 3 5 7 9 and 1 3 4 5 7 9 would be -5 -7 -9 +4 +5 +7 +9 done via a linear O(n) algorithm.

Sure, someone would see it when running tests locally in their terminal, but so much of testing is done on CI automation systems these days, where it's all or nothing and you don't see progress – you just wait for the entire suite to pass or fail, and if it fails you look at its processed output and ultimately logs if needed.

Just using unified_diff is one solution, as Tim notes. Nothing wrong with that; it'd work (yay autojunk logic?)!

One other thing to consider is that we intentionally limit the size of printed artifacts for the base case of assertEqual via the common_shorten_repr code in https://github.com/python/cpython/blob/main/Lib/unittest/util.py. We should do that with sequence comparisons as well. When the sum of the sequence lengths is more items than some constant (make it a TestCase attribute to let people override it if they want – "400" sounds nice, as it should never hang forever), skip diffing and just fall back to raw _baseAssertEqual() behavior. We could also add a TestCase attribute that is the diff function to use; that would allow people to override it and supply their own if desired. (See the sketch below.)
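
A sketch of that shape – the class, attribute names, and method are illustrative assumptions, not proposed API:

import difflib
import pprint
import unittest

class TestCaseWithDiffLimit(unittest.TestCase):
    maxDiffItems = 400                                 # skip diffing above this
    diffFunction = staticmethod(difflib.unified_diff)  # user-overridable

    def assertSequenceEqualSketch(self, seq1, seq2, msg=None):
        if seq1 == seq2:
            return
        if len(seq1) + len(seq2) > self.maxDiffItems:
            # Too many items: plain "a != b" failure, no diff at all
            # (real code would fall back to _baseAssertEqual here).
            raise self.failureException(msg or f'{seq1!r} != {seq2!r}')
        diff = '\n'.join(self.diffFunction(
            pprint.pformat(seq1).splitlines(),
            pprint.pformat(seq2).splitlines()))
        raise self.failureException(msg or f'{seq1!r} != {seq2!r}\n{diff}')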

100% agreed that people get away with writing test cases that go overboard today. But this is easy to do when writing tests that pass; it's only when a failure winds up in a poorly behaved corner case of the library that you'd even think about it being an issue. Until then, everything is extremely convenient to all involved.

@tim-one (Member) commented Feb 9, 2022

Ironically, difflib used to - long ago - pump out differences "left to right" as it went along. The algorithm is based on finding the longest contiguous matching substring, then recursively do the same on the pieces "to the left" and "to the right" of that match. So there's nothing in theory to prevent delivering the deltas on the left part before the right part starts.

But as the comments in get_matching_blocks() (which everything else builds on) say, that blew the recursion stack in some extreme cases. So it was rewritten to work with an explicit stack of slice pairs to work on, and a list of partial matching-block results that's sorted into "increasing order" at the end. That's the wait-until-the-end choke point.

It's still the case that ndiff's "fancy replace" algorithm can blow the recursion stack, though.

In any case, the autojunk=False quadratic-time cases for a unified diff hang during the very first call to find_longest_match(), and nothing of any kind can be delivered before that finishes.

@gpshead, autojunk is a really ugly hack, introduced purely for speed with insufficient thought. While it generally works well for comparing sequences of strings, it's often a disaster when comparing two long strings. In the latter case, almost every character "looks like junk", so will not be used as a synch point. The diffs that result can be of laughably poor quality. But, ya, you get them fast 😉. I'm not sure autojunk should exist at all, but it definitely should not have been enabled by default ☹️ .

As to spinning off a thread to do the diff, that's why this issue report will never be closed - everyone who looks at it eventually refuses to settle for a "good enough!" partial solution 😉.

@iritkatriel (Member) left a comment:

This has merge conflicts now.

@bedevere-bot:

A Python core developer has requested some changes be made to your pull request before we can consider merging it.

@jdevries3133 (Contributor, Author):

@tim-one I have not stopped thinking about this problem.

> As to spinning off a thread to do the diff, that's why this issue report will never be closed - everyone who looks at it eventually refuses to settle for a "good enough!" partial solution 😉.

OK, so it seems pretty well established that any heuristic can only shrink the set of possible inputs that would cause a big slowdown. My question to Tim and the other core developers in this thread is: do you see any possible solution that could cause this ticket to be closed?

In this category (ticket closers), I think that spawning, babysitting, and possibly aborting a worker thread is the most bulletproof idea. The only thing I worry about is whether Python's execution model will cause the parent thread to be blocked while the diffing library spins, creating a situation where the child can't be aborted. I think this wouldn't be the case, but I'm sure someone here can advise as to whether that might be a problem.

On the broad topic of heuristics, it seems clear to me that any heuristic is effectively just a band-aid that won't block the full set of problematic inputs without being extremely aggressive, like the heuristic that @gpshead suggested:

> I blindly suggest: max(len(a), len(b)) > 500 prevents calling unified_diff, and max(len(a), len(b)) > 200 prevents calling ndiff. Just based on the worst-case timings.

At the same time, I consider an extremely aggressive heuristic, or simply abandoning ndiff entirely, to be a breaking change. I believe that callers do have expectations about the stdout of Python's unit tests, regardless of whether we consider that an officially stable part of the API. There are regression tests in our codebase that break when test output changes, further reinforcing this assertion.

To wrap up the heuristic discussion, the foregone conclusion seems to be that heuristics are just icing on the cake – they don't address the first-principles issues, and a heuristic that is too aggressive might constitute a breaking change.

Tim & other core devs in the thread, my ultimate question is, do you sponsor a thread-based solution as a complete solution to the root problem we're trying to address here? Is there any other solution you'd sponsor as a complete enough solution to close the ticket? If not, are there any solutions that you'd sponsor enough to want it to be merged since it improves the ergonomics here, even if it's not a complete solution?

@gpshead (Member) commented Nov 27, 2022

No threads.

Stdout from tests is not an API. Anyone depending on that has made a mere change detector; that's their problem. Regardless, we wouldn't significantly change the output in a bugfix backport.

A heuristic to avoid potential regressive performance behaviors for something simpler is the best that can be done.

Labels: awaiting changes, performance, type-bug