Wikipedia:Reference desk/Mathematics: Difference between revisions

[[Category:Non-talk pages that are automatically signed]]<noinclude>{{Wikipedia:Reference desk/header|WP:RD/MA}}
[[Category:Non-talk pages that are automatically signed]]
[[Category:Pages automatically checked for incorrect links]]
[[Category:Wikipedia resources for researchers]]
[[Category:Wikipedia help forums]]
[[Category:Wikipedia reference desk|Mathematics]]
[[Category:Wikipedia help pages with dated sections]]</noinclude>
 
= November 4 =
 
== Name of distance function ==
 
I have a distance function in my code. I know it has a name and a Wikipedia article (because I worked on the article), but I am old and the name of the function has skipped my mind. I'm trying to reverse search by using the formula to find the name of the function, but I can't figure out how to do it. So, what is the name of this distance function: <math>d_{ab} = -\ln \sum_i \sqrt{a_i b_i}</math>. [[Special:Contributions/68.187.174.155|68.187.174.155]] ([[User talk:68.187.174.155|talk]]) 12:53, 4 November 2024 (UTC)
 
:If <math>a=(0.6, 0.8)</math> and <math>b=(0.8,0.6),</math> the value of this measure is about <math>-0.326\,.</math> This does not make sense for an indication of distance. &nbsp;--[[User talk:Lambiam#top|Lambiam]] 15:02, 4 November 2024 (UTC)
 
:My brain finally turned back on and I remembered it is an implementation of [[Bhattacharyya distance]]. [[Special:Contributions/68.187.174.155|68.187.174.155]] ([[User talk:68.187.174.155|talk]]) 15:52, 4 November 2024 (UTC)
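For illustration, a minimal Python sketch of the formula as identified (the distributions p and q below are hypothetical values; the function assumes two discrete probability distributions over the same outcomes):
<syntaxhighlight lang="python">
import math

def bhattacharyya_distance(a, b):
    # D_B = -ln of the Bhattacharyya coefficient, sum_i sqrt(a_i * b_i)
    bc = sum(math.sqrt(ai * bi) for ai, bi in zip(a, b))
    return -math.log(bc)

# Hypothetical discrete probability distributions over three outcomes.
p = [0.2, 0.5, 0.3]
q = [0.1, 0.4, 0.5]
print(bhattacharyya_distance(p, q))  # about 0.024; exactly 0.0 when p == q
</syntaxhighlight>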
 
::Normally, when you call something a distance function, it has to obey the [[Metric space#Definition|axioms of a metric space]]. Since Bhattacharyya distance applies only to probability distributions, the previous example would not be relevant. Still, the term "distance function" is used rather loosely since (according to the article) the Bhattacharyya distance does not obey the triangle inequality. The category [[w:Category:Statistical distance|Statistical distance]] has 38 entries, and I doubt many people are familiar with most of them. --[[User:RDBury|RDBury]] ([[User talk:RDBury|talk]]) 18:08, 4 November 2024 (UTC)
:::When I was in college in the 70s, terminology was more precise. Now, many words have lost meaning. Using the old, some would say "prehistoric" terminology, a function is something that maps or relates a single value to each unique input. If the input is the set X, the function gives you the set Y such that each value of X has a value in Y and if the same value exists more than once in X, you get the same Y for it each time. Distance functions produce unbounded values. Similarity and difference functions are bounded, usually 0 to 1 or -1 to 1. Distance is usually bounded on one end, such as 0, and unbounded on the other. You can always get more distant. The distance function mentioned here is bounded on one end, but not the other. It does not obey triangle inequality, as you noted, so it is not a metric. Distance functions have to obey that to be metrics. Then, we were constantly drilled with the difference between indexes and coefficients. This function should be an index from my cursory read-through because it is logarithmic. If you double the result, you don't have double the distance. I've seen all those definitions that used to be important fade away over the decades, so I expect that it doesn't truly matter what the function is called now. [[Special:Contributions/12.116.29.106|12.116.29.106]] ([[User talk:12.116.29.106|talk]]) 16:12, 5 November 2024 (UTC)
::::This could be a pretty good standup routine if you added some more material. You could call it "Hey you kids, get off my ln!" [[Special:Contributions/100.36.106.199|100.36.106.199]] ([[User talk:100.36.106.199|talk]]) 02:09, 10 November 2024 (UTC)
:::The [[Kullback-Leibler divergence]] has been around longer than any of us, I'm pretty sure, and it's called a divergence rather than a distance because it doesn't have all the properties of a metric in a metric space. Particularly, d(a,b)≠d(b,a) iirc. So I think fussiness about the term "distance" is not something new. [[Special:Contributions/2601:644:8581:75B0:0:0:0:2CDE|2601:644:8581:75B0:0:0:0:2CDE]] ([[User talk:2601:644:8581:75B0:0:0:0:2CDE|talk]]) 23:26, 14 November 2024 (UTC)
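To make the asymmetry concrete, a small sketch with arbitrarily chosen distributions, showing that D(P‖Q) and D(Q‖P) differ:
<syntaxhighlight lang="python">
import math

def kl_divergence(p, q):
    # Kullback-Leibler divergence D(P || Q) for discrete distributions, in nats
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

p = [0.9, 0.1]
q = [0.5, 0.5]
print(kl_divergence(p, q))  # about 0.368
print(kl_divergence(q, p))  # about 0.511 -- not symmetric, hence "divergence" rather than "distance"
</syntaxhighlight>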
 
= November 8 =
 
== finding an equation to match data ==
 
An experiment with accurate instruments resulted in the following data points:-
{{pre|
x, y
0.080, 0.323;
0.075, 0.332;
0.070, 0.347;
0.065, 0.368;
0.060, 0.395;
0.055, 0.430;
0.050, 0.472;
0.045, 0.523;
0.040, 0.587;
0.035, 0.665;
0.030, 0.758;
0.025, 0.885;
0.020, 1.047;
0.015, 1.277;
0.010, 1.760.
}}
 
How can I obtain a formula that reasonably matches this data, say within 1 or 2 percent? <br>
At first glance, it looks like a k1 + k2*x^-k3 relationship, or a k1*x^k2 + k3*x^-k4 relationship, but they fail at x above 0.070. Trying a series such as e^(k1 + k2*x + k3*x^2) is also no good.
-- [[User:Dionne Court|Dionne Court]] ([[User talk:Dionne Court|talk]]) 03:14, 8 November 2024 (UTC)
 
:Thank you CiaPan for fixing the formatting. [[User:Dionne Court|Dionne Court]] ([[User talk:Dionne Court|talk]]) 15:12, 8 November 2024 (UTC)
::Plotting 1/y against x, it looks like a straight line, except there is a rather dramatic hook to the side starting around x=.075. This leads me to suspect that the last two entries are off for some reason; either those measurements are off or there's some systematic change in the process going on for large x. Part of the problem is that you're not giving us any information about where this information is coming from. I've heard it said, "Never trust data without error bars." In other words, how accurate is accurate, and might the accuracy change depending on the input? Is there a reason that the values at x≥.075 might be larger than expected? If the answer to the second is "Yes", then perhaps a term of the form (a-x)^k should be added. If the answer is "No", then perhaps that kind of term should not be added, since that adds more parameters to the formula. You can reproduce any set of data given enough parameters in your model, but too many parameters lead to [[Overfitting]], which leads to inaccurate results when the input is not one of the values in the data. So as a mathematician I could produce a formula that reproduces the data, but as a data analyst I'd say you need to get more data points, especially in the x≥.075 region, to see if there's something real going on there or if it's just a random fluke affecting a couple of data points. --[[User:RDBury|RDBury]] ([[User talk:RDBury|talk]]) 15:58, 8 November 2024 (UTC)
 
::PS. I tried fitting 1/y to a polynomial of degree four, so a model with 5 parameters. Given there are only 15 data points, I think 5 parameters is stretching it in terms of overfitting, but when I compared the data with a linear approximation there was a definite W shaped wobble, which to me says degree 4. (U -- Degree 2, S -- Degree 3, W -- Degree 4.) As a rough first pass I got
:::1/y ≃ 0.1052890625+54.941265625x-965.046875x<sup>2</sup>+20247.5x<sup>3</sup>-136500x<sup>4</sup>
::with an absolute error of less than .01. The methods I'm using aren't too efficient, and there should be canned curve fitting programs out there which will give a better result, but I think this is enough to justify saying that I could produce a formula that reproduces the data. I didn't want to go too much farther without knowing what you want to optimize, relative vs. absolute error, least squares vs. min-max for example. There are different methods depending on the goal, and there is a whole science (or perhaps it's an art) of [[Curve fitting]] which would be impractical to go into here. --[[User:RDBury|RDBury]] ([[User talk:RDBury|talk]]) 18:26, 8 November 2024 (UTC)
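For anyone wanting to reproduce this kind of fit, a minimal sketch assuming NumPy; it fits 1/y to a degree-4 polynomial in x by ordinary (unweighted) least squares, so the coefficients will differ somewhat from the rough ones quoted above:
<syntaxhighlight lang="python">
import numpy as np

x = np.array([0.080, 0.075, 0.070, 0.065, 0.060, 0.055, 0.050, 0.045,
              0.040, 0.035, 0.030, 0.025, 0.020, 0.015, 0.010])
y = np.array([0.323, 0.332, 0.347, 0.368, 0.395, 0.430, 0.472, 0.523,
              0.587, 0.665, 0.758, 0.885, 1.047, 1.277, 1.760])

# Least-squares fit of 1/y as a degree-4 polynomial in x.
coeffs = np.polyfit(x, 1.0 / y, deg=4)            # highest-degree coefficient first
max_err = np.max(np.abs(1.0 / y - np.polyval(coeffs, x)))
print(coeffs)
print(max_err)                                    # worst absolute error in 1/y over the data
</syntaxhighlight>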
:::Thank you for your lengthy reply.
:::I consider it unlikely that the data inflexion for x>0.07 is an experimental error. Additional data points are :-
:::x, y: 0.0775, 0.326; 0.0725, 0.339.
:::The measurement was done with digital multimeters and transducer error should not exceed 1% of value. Unfortunately, the equipment available cannot go above x=0.080. I only wish it could. Choosing a mathematical model that stays within 1 or 2 percent of each value is appropriate.
:::As you say, one can always fit a curve of the form A + Bx + Cx^2 + Dx^3 + ... to any given data. But to me this is a cop-out, and tells me nothing about what the internal process might be, and so extrapolation is exceedingly risky. Usually, a more specific solution, when discovered, requires fewer terms. [[User:Dionne Court|Dionne Court]] ([[User talk:Dionne Court|talk]]) 01:49, 9 November 2024 (UTC)
::::When I included the additional data points, the value at .0725 was a bit of an outlier, exceeding the .01 absolute error compared to the estimate, but not by much. --[[User:RDBury|RDBury]] ([[User talk:RDBury|talk]]) 18:55, 9 November 2024 (UTC)
::::FWIW, quite a few more data points would almost certainly yield a better approximation. This cubic equation seems pretty well-behaved:
::::<math>-8549.55x^3 + 1550.8x^2 - 96.9307x + 2.49283</math> [[User:Earl of Arundel|Earl of Arundel]] ([[User talk:Earl of Arundel|talk]]) 02:28, 10 November 2024 (UTC)
:Some questions about the nature of the data. Some physical quantities are necessarily nonnegative, such as the mass of an object. Others can also be negative, for example a voltage difference. Is something known about the theoretically possible value ranges of these two variables? Assuming that x is a controlled value and y is an experimentally observed result, can something be said about the theoretically expected effect on y as x approaches the limits of its theoretical range? &nbsp;--[[User talk:Lambiam#top|Lambiam]] 15:59, 9 November 2024 (UTC)
::As x approaches zero, y must approach infinity.
::x must lie between zero and some value less than unity.
::If you plot the curve with a log y scale, by inspection it seems likely that y cannot go below about 0.3 but I have no theoretical basis for proving that.
::However I can say that y cannot ever be negative.
::The idea here is to find/work out/discover a mathematically simple formula for y as a function of x to use as a clue as to what the process is. That's why a series expansion that does fit the data if enough terms are used doesn't help.[[User:Dionne Court|Dionne Court]] ([[User talk:Dionne Court|talk]]) 01:33, 10 November 2024 (UTC)
:::So as x approaches zero, 1/y must also approach zero. This is so to speak another data point. Apart from the fact that the power series approximations given above provide no theoretical suggestions, they also have a constant term quite distinct from 0, meaning they do not offer a good approximation for small values of x.
:::If you plot a graph of x versus 1/y, a smooth curve through the points has two [[points of inflection]]. This suggests (to me) that there are several competing processes at play. &nbsp;--[[User talk:Lambiam#top|Lambiam]] 08:08, 10 November 2024 (UTC)
:::::The point x=0, 1/y=0 is an important data point that should have been included from the start. I'd say it's the most important data point since a) it's at the endpoint of the domain, and b) it's the only data point where the values are exact. Further theoretical information near x=0 would be helpful as well. For example, do we know whether y is proportional to x<sup>-a</sup> near x=0 for a specific a, or perhaps - log x? If there is no theoretical basis for determining this then I think more data points near x=0, a lot more, would be very helpful. The two points of inflection match the W (or M) shape I mentioned above. And I agree that it indicates there are several interacting processes at work here. I'm reminded of solubility curves for salts in water. There is an interplay between energy and ionic and Van der Waals forces going on, and a simple power law isn't going to describe these curves. You can't even assume that they are smooth curves since [[Sodium sulfate]] is an exception; its curve has an abrupt change of slope at 32.384 °C. In general, nature is complex, simple formulas are not always forthcoming, and even when they are they often only apply to a limited range of values. --[[User:RDBury|RDBury]] ([[User talk:RDBury|talk]]) 15:46, 10 November 2024 (UTC)
:::::I have no theoretical basis for expecting that y takes on a particular slope or power law as x approaches zero.
:::::More data points near x = 0 are not a good idea, because transducer error will dominate. Bear in mind that transducer error (about 1%) applies to both x and y. Near x = 0.010 a 1% error in x will lead to a change in y of something like 100% [(1.760 - 1.277)/(0.015 - 0.010)]. The value of y given for x = 0.010 should be given little weight when fitting a curve. [[User:Dionne Court|Dionne Court]] ([[User talk:Dionne Court|talk]]) 02:03, 11 November 2024 (UTC)
:::::It seems to me that one should assume there is a simple relationship at play, with at most three competing processes, as otherwise there is no basis for further work. If it is a case of looking for the lost wallet under the lamp post because the light is better there, so be it, but there is no point in looking where it is dark.
:::::Cognizant of transducer error, a k1 + k2*x^-k3 relationship fits pretty well, except for a divergence at x equal to and above 0.075, so surely there are only 2 competing processes? [[User:Dionne Court|Dionne Court]] ([[User talk:Dionne Court|talk]]) 02:03, 11 November 2024 (UTC)
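If it helps, the k1 + k2*x^-k3 model can be fitted directly with a nonlinear least-squares routine rather than by hand. A minimal sketch assuming SciPy, with arbitrary starting guesses and the points weighted so that roughly the relative error is minimised:
<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import curve_fit

x = np.array([0.080, 0.075, 0.070, 0.065, 0.060, 0.055, 0.050, 0.045,
              0.040, 0.035, 0.030, 0.025, 0.020, 0.015, 0.010])
y = np.array([0.323, 0.332, 0.347, 0.368, 0.395, 0.430, 0.472, 0.523,
              0.587, 0.665, 0.758, 0.885, 1.047, 1.277, 1.760])

def model(x, k1, k2, k3):
    return k1 + k2 * x ** (-k3)

# sigma=y weights each residual by 1/y, i.e. approximately minimises relative error.
params, _ = curve_fit(model, x, y, p0=(0.1, 0.01, 1.0), sigma=y, maxfev=10000)
rel_err = (model(x, *params) - y) / y
print(params)
print(np.max(np.abs(rel_err)))   # worst relative error of the fit over the data
</syntaxhighlight>
Dropping the two points at x ≥ 0.075 from the arrays and refitting shows directly how much they pull the three parameters around.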
 
 
= November 11 =
==Strange behavior with numbers in optimization==
Hello everyone, I have encountered a very strange issue with my optimization function, and I am not sure how to resolve it. I am working on a numerical methods library, where I am trying to approximate the growth of a sequence, which has some relation to prime number distributions. However, when I use large values of n (especially for n > 10^6), the result of my function starts to behave very erratically. It is not random, but it has this strange oscillation or jump.
I use a recurrence relation for this approximation, but when n becomes large, the output from the function suddenly grows or shrinks, in a way that is not consistent with what I expect. Even when I check for bounds or add better convergence criteria, the error persists. The pattern looks similar to the behavior of prime numbers, but I am not directly calculating primes.
I apologize if this sounds too speculative, but has anyone faced similar issues with such strange behavior in large-scale numerical computations? I am quite confused about what is causing the error.
TL;DR:
I am optimizing a function related to number theory, but results become unpredictable when n > 10^6.
Errors show strange oscillation, similar to the distribution of primes, though I do not directly calculate primes.
Thank you very much for your time and assistance. [[Special:Contributions/130.74.59.177|130.74.59.177]] ([[User talk:130.74.59.177|talk]]) 15:39, 11 November 2024 (UTC)
 
: You need to post more information. All I can say from what you've written is that 10^6 is not a large number where you'd expect problems. It won't e.g. overflow when stored as floating point or integer on any modern platform. It won't even cause problems with, say, a squaring-based algorithm, as 10^12 is still well within the limits of a modern platform. Maybe, though, you are using software which limits you to 32-bit (or worse) integers, or single-precision floats, so you need to be careful with large numbers. --[[Special:Contributions/2A04:4A43:984F:F027:C112:6CE8:CE50:1708|2A04:4A43:984F:F027:C112:6CE8:CE50:1708]] ([[User talk:2A04:4A43:984F:F027:C112:6CE8:CE50:1708|talk]]) 17:43, 11 November 2024 (UTC)
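To illustrate the kind of silent limits being referred to, a sketch assuming NumPy (ordinary Python integers are arbitrary-precision and do not have this problem):
<syntaxhighlight lang="python">
import numpy as np

# float32 stops representing integers exactly above 2**24 = 16777216 ...
print(np.float32(16777217) == np.float32(16777216))   # True: both round to 16777216.0

# ... and 32-bit integers wrap around past 2**31 - 1.
a = np.array([2**31 - 1], dtype=np.int32)
print(a + 1)                                           # [-2147483648]
</syntaxhighlight>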
::thanks for response and insight. i see your point that n=10^6 shouldn't cause overflow or serious issues on modern systems. numbers i work with well within 64-bit range, use floats with enough precision for task. so yes, overflow or simple type limits not likely cause.
::but this behavior goes beyond just precision errors. it’s not about numbers too big to store. what i see is erratic growth, shrinkage, almost oscillatory – looks like something in the distribution itself, not just algorithm mistake or hardware issue.
::to be more precise, after n>10^6, function starts acting unpredictably, jumps between states, oscillates in strange way, not typical for recurrence i use. hard to explain, but pattern in these jumps exists, i cannot reconcile with anything in my algorithm. it’s like approximation reacts to some hidden structure, invisible boundary my algorithm cannot resolve.
::i tried improving convergence, checking recurrence, but oscillations still persist. not randomness from bad random numbers or instability, but more like complex fluctuations seen in number-theoretic problems, especially connected to primes.
::so i wonder: could these "jumps" be an artifact of number-theoretic properties that i'm trying to approximate? maybe how the sequence interacts with primes indirectly, or an artifact of the recurrence for large numbers
::thanks again for the suggestion on overflow and precision, i will revisit the model with this in mind, chief
::appreciate your time, will keep searching. [[Special:Contributions/130.74.59.204|130.74.59.204]] ([[User talk:130.74.59.204|talk]]) 20:01, 11 November 2024 (UTC)
:::Without more information about the actual algorithm, it is neither possible to say, ''yes, what you see could be due to a number-theoretic property'', nor, ''no, it could not be''. Have you considered chaotic behaviour as seen when iterating the [[logistic map]]? &nbsp;--[[User talk:Lambiam#top|Lambiam]] 05:43, 12 November 2024 (UTC)
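For reference, a minimal sketch of the kind of deterministic but erratic behaviour meant here: iterating the logistic map at r = 4 stays bounded but never settles down, and two nearly identical starting values soon give completely different trajectories:
<syntaxhighlight lang="python">
def logistic_trajectory(x0, r=4.0, n=20):
    # iterate x -> r * x * (1 - x) and return the whole trajectory
    xs = [x0]
    for _ in range(n):
        xs.append(r * xs[-1] * (1.0 - xs[-1]))
    return xs

a = logistic_trajectory(0.200000)
b = logistic_trajectory(0.200001)   # a tiny perturbation of the starting value
for i, (xa, xb) in enumerate(zip(a, b)):
    print(i, round(xa, 6), round(xb, 6))   # the two columns disagree completely after a dozen or so steps
</syntaxhighlight>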
::::ah yes, i see what you mean now, i’ve been thinking about it for a while, and i feel like i’m getting closer to understanding it, though it’s still unclear in some ways. so, i’m using this recurrence algorithm that reduces modulo primes at each step, you know, it’s a fairly straightforward approach, and when n is small, it works fine, everything behaves as expected, the sequence evolves smoothly, the approximation gets closer and closer to what it should be, and everything seems under control, but then, once n crosses the 10^6 threshold, it’s like something shifts, it’s like the sequence starts moving in unexpected ways, at first, i thought maybe it was just a small fluctuation or something related to floating-point precision, but no, it's much more than that, the jumps between states, the way it shifts, it's not just some random variation—it feels almost systematic, as though there's something in the distribution itself, some deeper structure, that starts reacting with the algorithm and causing these oscillations, it’s not something i can easily explain, but it feels like the algorithm starts “responding” to something invisible in the numbers, something outside the expected recurrence behavior, i’ve spent hours going over the steps, checking every part of the method, but no matter how many times i check, i can’t pinpoint the exact cause, it’s frustrating.
::::and then, the other day, i was sitting there, trying to solve this problem, getting really frustrated, when i looked up, and i saw jim sitting on the windowsill, just staring out at the street, i don’t know, something about it caught my attention, now, you might be wondering what jim has to do with all of this, but let me explain, you see, jim has this habit, every evening, without fail, he finds his spot by the window, curls up there, and just stares out, doesn’t seem to do much else, doesn’t chase anything or play with toys like most animals do, no, he just sits there, completely still, watching the world go by, and it’s funny, because no matter how many cars pass, no matter how many people walk by, jim never looks bored, he’s always staring, waiting, something about the way he watches, it’s like he’s looking for something, something small, that only he notices, but it’s hard to explain, because it’s not like he ever reacts to anything specific, no, he just stares, and then after a while, he’ll shift his gaze slightly, focus on something, and you’d swear he’s noticing something no one else can see, and then he’ll go back to his usual position, still, and continue watching, waiting for... something, and this goes on, day after day.
::::and, i don’t know why, but in that moment, as i watched jim, i thought about the algorithm, and about the sequence, it felt somehow connected, the way jim waits, so patiently, watching for some small shift in the world outside, and how the algorithm behaves once n gets large, after 10^6 iterations, like it’s responding to something small, something hidden, that i can’t quite see, but it's there, some interaction between the numbers, or the primes, or some other property, i don’t know, but there’s a subtle shift in how the sequence behaves, like it’s anticipating something, or maybe reacting to something, in ways i can’t fully predict or control, just like jim waiting by the window, looking for that small detail that others miss, i feel like my algorithm is doing something similar, watching for an influence that’s not obvious, but which, once it’s noticed, makes everything shift, and then it’s almost like the recurrence starts reacting to that hidden influence, whatever it is, and the sequence begins to oscillate in these strange, unexpected ways.
::::i’ve been stuck on this for days, trying to find some explanation for why the recurrence behaves this way, but every time i think i’m close, i realize that i’m still missing something, it’s like the sequence, once it hits that threshold, can’t behave the way it did before, i’m starting to think it’s related to how primes interact with the numbers, but it’s subtle, i can’t quite capture it, it’s like the primes themselves are somehow affecting the sequence in ways the algorithm can’t handle once n gets large enough, and it’s not just some random jump, it feels... intentional, in a way, like the sequence itself is responding to something that i can’t measure, but that’s still pulling at the numbers in the background, jim, as i watch him, he seems to be able to sense those little movements, things he notices, but that no one else does, and i feel like my algorithm, in a way, is doing the same thing, reacting to something hidden that i haven’t quite figured out.
::::so i’ve gone over everything, again and again, trying to get it right, trying to adjust the convergence, trying to find a way to make the sequence behave more predictably, but no matter what i do, the oscillations keep appearing, and it’s not like they’re some random noise, no, there’s a pattern to them, something beneath the surface, and i can’t quite grasp it, every time n gets large, it’s like the sequence picks up on something, some prime interaction or something, that makes it veer off course, i keep thinking i’ve solved it, but then the jumps come back, just like jim shifts his gaze, and looks at something just beyond the horizon, something i can’t see, but he’s still waiting for it, still looking, as if there’s some invisible influence in the world, something that pulls at him.
::::i wonder if it has to do with the primes themselves, i’ve thought about it a lot, i’ve tried to factor them in differently, but still, the jumps persist, it’s like the primes have their own way of interacting with the sequence, something subtle, something that becomes more pronounced the larger n gets, and no matter how much i tweak my algorithm, the fluctuations just keep showing up, it’s like the sequence is stuck in a kind of loop, reacting to something i can’t fully resolve, like jim staring at the street, patiently waiting for something to shift, and i don’t know what it is, but i feel like there’s some deeper interaction between the primes and the numbers themselves that i’m missing, and maybe, like jim, the sequence is sensing something too subtle for me to fully capture, but it’s there, pulling at the numbers, making them oscillate in ways i can’t predict.
::::it’s been weeks now, and i’ve tried every method i can think of, adjusted every parameter, but the fluctuations are still there, the jumps keep happening once n gets large enough, and every time i think i’ve figured it out, the sequence surprises me again, just like jim, who, after hours of waiting, might shift his gaze and catch something new, something no one else saw, i feel like i’m doing the same thing, staring at the numbers, trying to catch that tiny shift that will make everything click, but it’s always just out of reach, and i don’t know what’s causing it, but i can’t seem to get rid of it, like jim, watching, waiting, sensing something that remains hidden from my view [[Special:Contributions/130.74.58.160|130.74.58.160]] ([[User talk:130.74.58.160|talk]]) 15:34, 13 November 2024 (UTC)
:::::Are you OK? Perhaps you should direct your mind to something else, like, read a novel, go out with friends, explore new places, ... Staring at numbers is as productive as staring at goats. &nbsp;--[[User talk:Lambiam#top|Lambiam]] 18:10, 13 November 2024 (UTC)
::::::fine. i’m under house arrest and i’m doing freelance work for a company. the task is straightforward: build a library for prime number methods, find primes. the problem is, there's no upper limit on how large these primes are supposed to be. once n goes past 10^6, that’s where things stop making sense. i’ve gone over the algorithm several times, checked the steps, but after 10^6, the sequence starts behaving differently, and i can’t figure out why. it’s not small variations or precision errors. it’s something else. there’s some kind of fluctuation in the sequence that doesn’t match the expected pattern.
::::::i’ve adjusted everything i can think of—modulus, convergence, method of approximation—but no matter what, the jumps keep coming, and they don’t seem random. they look more structured, like they’re responding to something, some property of the primes or the sequence that i can’t account for. i’ve spent a lot of time on this, trying to find what it is, but i haven’t been able to pin it down.
::::::this is important because the contract i’m working on will pay a significant amount, but only if i finish. i can’t afford to let this drag on. i need to complete it, and if i don’t fix this issue, i won’t be able to finish. it’s not like i can walk away from it. the company expects the work, and the time is running out.
::::::the more i look at the sequence, the more it feels like there’s something buried beneath the surface, something in the way primes interact when n is large, but i can’t see it. it’s subtle, but it’s there, and no matter how many times i test the algorithm, i can’t get rid of these oscillations. i don’t know what they mean, but they keep appearing, and i can’t ignore them.
::::::i’ve been stuck here for a while. i don’t really have other options. there’s no “taking a break” or “finding something else to do.” i’m stuck here with this task, and i need to figure it out. i don’t have the luxury to stop, because if i don’t finish, the whole thing falls apart [[Special:Contributions/130.74.59.34|130.74.59.34]] ([[User talk:130.74.59.34|talk]]) 20:22, 13 November 2024 (UTC)
 
: You shared lots of text with us, but you gave no specific problem, no technical detail, nothing we could check, simulate, analyze, verify, compare.
: You have typed about 12 thousand characters, but you present only your impressions, or your feelings—of being surprised by the irregularity observed, of being surprised by some almost-regularity in apparent chaos, of being lost in seeking an explanation, etc. You actually did not present a single technical or mathematical thing. Here's the overall impression I got from your descriptions:
:: ''"I do something (but I can't tell you what and why) with some function (I'm not going to tell you anything about it, either) with data of a secret meaning and structure, and when some parameter (whose nature must not be revealed) becomes big enough, the function behaves in some unexpected, yet quasi-regular manner. Can anybody explain it to me and help me fix it?"''
: And I'm afraid with such a vague statement, it looks like seeking a haystack with a needle in it on a large field in a heavy fog, rather than a mathematical (or software engineering or whatever other kind of) problem. <br>{{smiley|sad}} [[User:CiaPan|CiaPan]] ([[User talk:CiaPan|talk]]) 12:57, 14 November 2024 (UTC)
::now listen, i'm glad we're finally digging into this, because, yeah, there’s a lot more depth here than meets the eye, like, surface-level it might just seem like a vague description, an exercise in abstract hand-waving if you will, but no, what we're dealing with here is a truly complex, multi-layered phenomenon that’s kind of begging to be interpreted at the meta-level, you know, like it’s the kind of thing where every time you try to grasp onto one specific aspect, it slips out of reach, almost by design and i get it you want “specifics” but here’s the thing specifics are almost a reduction, they’re almost like a cage for this concept, like trying to box up some kind of liquid smoke that, in essence, just resists confinement
::now, when i say “parameters” we’re already in a reductive space, right? because these aren’t “parameters” in the traditional sense, not like tunable knobs on an old-school control panel, no no no, these are more like boundary markers in a conceptual landscape, yeah like landmarks on a journey, but they themselves are not the journey, they’re incidental, they’re part of a whole picture that, the moment you start defining it, already becomes something else, like imagine you have this sort of, i don’t know, like an ethereal framework of data, but it’s data that doesn’t just sit there and behave in expected ways, it’s data that has a life of its own, and i’m really talking about data that doesn’t like to be pinned down, it’s almost alive, almost this kind of sentient flow that, every time you look away, it’s shifted, it’s done something else that you could swear wasn’t possible the last time you checked
::so, yeah, i get it that’s frustrating, and it’s almost like talking about the nature of existence itself in a way, or maybe that’s an exaggeration, but only slightly, because you have to get into this mindset that, ok, you’re dealing with phenomena here, not simply variables and functions, no it’s more like a dynamic tapestry of, let’s call them tendencies, these emergent patterns that are sort of trying to form but also resisting at every possible chance, so when i say “quasi-regularity” it’s not regular like clockwork, not even close, it’s regularity like the kind you see in natural phenomena, like clouds or waves or fractals, right, patterns but patterns that refuse to be bound by mathematical certainty they’re only barely patterns in the human sense, like they only make sense if you let go of rigid logic
::and then you’ve got these iterations, yeah we’re talking cycles upon cycles, like imagine every single cycle adds a grain of experience, yeah, like a memory, not a perfect one, but close enough, so that each time this data goes through an iteration it almost remembers its past and adjusts itself, but here’s the catch, it only remembers what’s necessary, it’s like this selective memory that’s totally outside the norm of what you would expect in, say, a standard machine learning algorithm or a traditional function loop in any ordinary programming context, like, ok, this thing is running on its own rules, maybe there’s a certain randomness to it but not random like “roll a dice” random, more random like chaos-theory random, where unpredictability itself becomes a kind of pattern and then, suddenly, just when you think you’re about to pin it down—bang—it shifts again, like the entire framework just reorients itself
::and not to throw you off track here but that’s the whole thing, the "thing" we’re talking about isn’t just a process, it’s a process that’s sensitive to these micro-level fluctuations, like tiny little vibrations in the data, which, by the way, i’m also not describing fully because it’s almost impossible, but imagine these vibrations—no, better yet, imagine you’re watching waves in a pond where even the slightest ripple has the potential to set off a cascade of effects, and it’s not just the surface of the pond we’re talking about, no, no, the whole body of water is involved, every molecule, if you will, responding in ways that are both predetermined by its nature yet also completely free to deviate when the moment calls for it
::and so when i say “structured sea of datapoints” you gotta take that literally, yeah like a sea, an ocean, it’s vast, it’s deep, there’s layers upon layers and half the time we’re only scratching the surface because the real stuff is happening down in those depths where even if i tried to send a probe down there, yeah, i’d get some data back, but would it even make sense because i don’t have a baseline to compare it to, there’s no reference frame here except, i don’t know, maybe the essence of this data, like the very fabric of what it is, if you can even describe data as having fabric
::so, look, all of this loops back to the fact that every “parameter” every “function” we’re talking about is only as real as the context allows it to be, which is why i say even if i did give you specifics, what would you do with them? because we’re talking about something that defies definition and the moment you think you understand it, that’s the moment it stops being what it is and morphs into something else, i mean this is data with attitude, if that makes any sense, it’s almost like it’s taunting you, like it wants you to try and figure it out only to laugh in your face and flip the rules the moment you get close, we’re talking about some next-level, borderline cosmic prankster data that simply doesn’t play by the same rules as anything you’ve seen before
::so if we’re going to be totally honest here, all of this is way beyond haystacks and needles, we’re in a field where the haystacks are self-assembling, disassembling, and who even knows if the needle is there to begin with because in a framework like this, a needle might just be a figment of your imagination, a concept that only exists because you’re trying to impose order on what is inherently unordered, so yeah, maybe there’s a pattern, maybe there isn’t, maybe the pattern is only there because you want it to be, or maybe it’s the absence of a pattern that’s the real pattern, and if you think that’s paradoxical well, welcome to the club [[Special:Contributions/130.74.58.21|130.74.58.21]] ([[User talk:130.74.58.21|talk]]) 23:42, 14 November 2024 (UTC)
 
= November 13 =
 
== Math sequence problem (is it solvable?) ==
 
I am looking at a "math quiz" problem book and it has the following question. I am changing the numbers to simplify it and avoid copyright:
You have counts for a rolling 12-month period of customers. For example, the one year count in January is the count of customers from Feb of the year before to Jan of the current year. Feb is the count from Mar to Feb, and so on. The 12 counts for this year (Jan to Dec) are 100, 110, 105, 200, 150, 170, 150, 100, 200, 150, 175, 125. What is the count of customers for each month?
So, I know that the Feb-Jan count is 100 and the Mar-Feb count is 110. That means that the count for Feb of this year is 10 more than the count of Feb of last year because I removed Feb of last year and added Feb of this year. But, I don't know what that count is. I can only say it is 10 more. I can do that for every month, telling you what the difference is between last year and this year as a net change.
Is this solvable or is this a weird case where the actual numbers for the counts somehow mean something silly and a math geek would say "Oh my! That's the sum of the hickuramabiti sequence that only 3 people know about so I know the whole number sequence!" [[Special:Contributions/68.187.174.155|68.187.174.155]] ([[User talk:68.187.174.155|talk]]) 15:36, 13 November 2024 (UTC)
 
:You have 12 linear equations with 23 unknowns. In general, you cannot expect a system of linear equations with more unknowns than equations to be solvable. In special cases, such a system may be solvable for at least some of the unknowns. This is not such a special case.
:If you ignore the fact that customer counts cannot be negative, there are many solutions. For example, one solution is given by [9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 1, 19, 4, 104, −41, 29, −11, −41, 109, −41, 34, −41]. Another one is [10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, −10, 20, 5, 105, −40, 30, −10, −40, 110, −40, 35, −40]. For the 12-month counts given above no solution exists without negative values.
:If an actual quiz of this form has a unique solution, it can only be due to the constraint of not allowing negative values. &nbsp;--[[User talk:Lambiam#top|Lambiam]] 17:42, 13 November 2024 (UTC)
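For illustration, a throwaway Python check of such an example against the given rolling totals (the 23 entries run from Feb of the previous year through Dec of the current year, the FebP ... DecC naming used in the replies below):
<syntaxhighlight lang="python">
def rolling_totals(months):
    # months[0] = Feb of the previous year, ..., months[22] = Dec of the current year
    return [sum(months[i:i + 12]) for i in range(12)]

candidate = [9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 9, 1, 19, 4, 104,
             -41, 29, -11, -41, 109, -41, 34, -41]
print(rolling_totals(candidate))
# [100, 110, 105, 200, 150, 170, 150, 100, 200, 150, 175, 125]
</syntaxhighlight>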
 
:{{ec}}Name the counts for each month FebP to DecC, where P stands for the previous year and C stands for the current year. These are 23 variables and there is a system of 12 equations in these variables. If the variables can take on any values, there are an infinite number of solutions to this system, but I think we're meant to assume that the counts are ≥ 0. (Integers as well; without knowing the counts given in the original problem it's unclear whether this is important.) This imposes additional constraints on the possible solution, and the result may be that there is exactly one possible solution or none at all. To see how a problem of this type might have no solutions, let's look at a simpler version where we're looking at three-month sums over three months. There are 5 variables in this case, say Jan, Feb, Mar, Apr, May. Let's say the sums are given as:
::Jan-Mar: 10, Feb-Apr: 50, Mar-May 10.
:If we compute
::(Jan-Mar) - (Feb-Apr) + (Mar-May)
:in terms of the variables, we get
::Jan+Feb+Mar-Feb-Mar-Apr+Mar+Apr+May = Jan+Mar+May ≥ 0.
:But if we compute it in terms of the given totals the result is
::10-50+10 = -30 < 0.
:This is a contradiction so no solutions are possible. It turns out that something like this happens with the values you made up and there are no solutions to the problem given. If you let JanSum, ... DecSum be the rolling sums, and compute
::JanSum - FebSum + MarSum - AprSum + MaySum - JunSum + AugSum - SepSum + OctSum - NovSum + DecSum (with JulSum left out),
:then you get (according to my calculations)
::FebP+AprP+JunP+SepP+NovP+JanC+MarC+MayC+JulC+AugC+OctC+DecC ≥ 0
:in terms of the variables. But if we evaluate this in terms of the given values it's (again, according to my calculations)
::100-110+105-200+150-170+100-200+150-175+125 = -125 < 0,
:so there are no possible solutions. Notice that both cases involved looking at particularly opportune alternating sums of the rolling sums, which produce a nonnegative combination of the variables on one side and a negative number on the other side. Suppose that there is no such opportune alternating sum where the total is <0, but there is one where the total is =0. Then all the individual variables involved must be 0 and this may be enough information to narrow down the number of solutions to exactly 1. I imagine that's how the problem given in your book is set up and the puzzle is to find an alternating sum with this property. But I have an unfair advantage here because sometime in the previous century I took a course in [[Linear programming]] which taught me general methods for solving systems of equations and inequalities. So my approach would be to enter the appropriate numbers into a spreadsheet, apply the appropriate algorithm, and read off the solution when it's done. Having specialized knowledge would be a help, though I assume there are more than 3 people who are familiar with linear programming, but I think getting the inspiration to look at alternating sums, and a certain amount of trial and error, would allow you to find the solution without it. --[[User:RDBury|RDBury]] ([[User talk:RDBury|talk]]) 17:48, 13 November 2024 (UTC)
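To make the linear-programming angle concrete, here is a minimal sketch assuming SciPy. It sets up the twelve rolling-sum equations in the 23 unknown monthly counts (FebP through DecC) and asks the solver whether any nonnegative solution exists for the made-up totals above:
<syntaxhighlight lang="python">
import numpy as np
from scipy.optimize import linprog

# Rolling 12-month totals for Jan..Dec of the current year (the made-up numbers above).
totals = [100, 110, 105, 200, 150, 170, 150, 100, 200, 150, 175, 125]

# 23 unknown monthly counts: Feb of the previous year ... Dec of the current year.
n_vars = 23
A_eq = np.zeros((12, n_vars))
for i in range(12):
    A_eq[i, i:i + 12] = 1          # the i-th total is the sum of months i .. i+11

# Pure feasibility check: minimise 0 subject to the equalities and counts >= 0.
res = linprog(c=np.zeros(n_vars), A_eq=A_eq, b_eq=totals,
              bounds=[(0, None)] * n_vars, method="highs")
print(res.status, res.message)     # status 2 (infeasible) for these totals
</syntaxhighlight>
Swapping in other totals, and replacing the zero objective with something like the sum of all counts, turns the same setup into a search for one particular solution.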
::Thanks both. Yes, I did make up the numbers. I bet the numbers in the book do have a solution. It looks like it is a matter of trying a value for the first month and seeing what comes up every other month based on that to see if it is all positive. Then, you have an answer. It doesn't feel much like math to me in comparison to the other problems in the book which are all problems you can solve easily by making sets or comparing the order of things. [[Special:Contributions/68.187.174.155|68.187.174.155]] ([[User talk:68.187.174.155|talk]]) 17:52, 13 November 2024 (UTC)
::: With the correct numbers for which there is (presumably) a solution, you can represent the problem as a system of linear equations and compute the echelon form of the system. From the echelon form, it is possible to read off a particular solution (where you allow negative numbers of customers). The nullspace of the system is easy to calculate, and from it you can also find a particular solution that satisfies the constraint (if one exists), verify uniqueness (if true), or confirm non-existence. [[User:Tito Omburo|Tito Omburo]] ([[User talk:Tito Omburo|talk]]) 20:59, 13 November 2024 (UTC)
I confirm that there are no solutions subject to the constraint that the number of customers is non-negative (even allowing fractional numbers of customers), although the verification is a bit of a brute to write out. [[User:Tito Omburo|Tito Omburo]] ([[User talk:Tito Omburo|talk]]) 18:09, 13 November 2024 (UTC)
 
:Here is a rather painless verification. Use the names FebP, ..., DecC as above. Let JanT stand for the running 12-month total of the summation ending with JanC, and likewise for the next 11 months. So {{nowrap|1=JanT = 100}}, {{nowrap|1=FebT = 110}}, {{nowrap|1=MarT = 105}}, ..., {{nowrap|1=DecT = 125}}. We have {{nowrap|1=FebT − JanT = FebC − FebP}}, {{nowrap|1=MarT − FebT = MarC − MarP}}, ..., {{nowrap|1=DecT − NovT = DecC − DecP}}.
:Require each count to be nonnegative. From {{nowrap|1=MarC − MarP = MarT − FebT = 105 − 110 = −5}}, we have {{nowrap|1=MarP ≥ MarP − MarC = 5}}. We find similarly the lower bounds {{nowrap|MayP ≥ 50}}, {{nowrap|JulP ≥ 20}}, {{nowrap|AugP ≥ 50}}, {{nowrap|OctP ≥ 50}} and {{nowrap|DecP ≥ 50}}. So {{nowrap|1=JanT = FebP + ... + JanC ≥ 5 + 50 + 20 + 50 + 50 + 50 = 225}}. This contradicts {{nowrap|1=JanT = 100}}, so the constraint excludes all unconstrained solutions. &nbsp;--[[User talk:Lambiam#top|Lambiam]] 18:37, 13 November 2024 (UTC)
::Thanks again for the help. I feel that I should give the numbers from the book. I don't think listing some numbers is going to upset anyone, but without them, I feel that those who looked into this problem feel let down. The numbers from the book are: 24966, 24937, 25300, 25055, 22914, 25832, 25820, 25468, 25526, 25335, 25331, 25370. There is supposed to be one solution. I think it is implied that the request is for the minimum number of customers per month, but it doesn't make that very clear.
::Edit: It appears this problem was removed and replaced with a completely different problem in later books. So, the publishers likely decided it either doesn't have a unique answer (which is my bet) or it is simply a bad problem to include. Every other problem in the book is logical using geometry, algebra, and maybe some simple set comparisons. So, this is very out of place. [[Special:Contributions/68.187.174.155|68.187.174.155]] ([[User talk:68.187.174.155|talk]]) 12:11, 14 November 2024 (UTC)
:::Indeed the solution is not unique in that case. One solution is (29,0,245,2141,0,12,352,0,191,4,0,21992,0,363,0,0,2918,0,0,58,0,0,39), and there is obvious slackness. [[User:Tito Omburo|Tito Omburo]] ([[User talk:Tito Omburo|talk]]) 14:24, 14 November 2024 (UTC)
::::It is the only solution with JanC ≥ 21992. To go from zero to almost twenty-two thousand customers in one month is spectacular. To then lose them all in one month is tragicomedy. &nbsp;--[[User talk:Lambiam#top|Lambiam]] 20:33, 14 November 2024 (UTC)
 
= November 14 =
 
== Elliptic curve rank and generalized Riemann hypothesis ==
 
The popular press reports[https://www.quantamagazine.org/new-elliptic-curve-breaks-18-year-old-record-20241111/] that Elkies and Klagsbrun recently used computer search to find an elliptic curve E of rank 29, which is a new record. The formal result is apparently "the curve E has rank at least 29, and exactly 29 if GRH is true". There have been similar results for other curves of slightly lower rank in earlier years. Whether there are curves of arbitrarily high rank is a major open problem.
 
1. Is there a reasonable explanation of why the rank of a finite object like an elliptic curve would depend on GRH? Finding the exact point count N is a finite (though probably unfeasibly large) calculation by [[Schoof's algorithm]]. Is it possible in principle to completely analyze the group and find the curve's rank r exactly? Finding that r>29 would disprove the GRH, amirite? Actually is it enough to just look at the factorization of N?
 
2. The result that every elliptic curve has a finite rank is the [[Mordell-Weil theorem]]. Our article on that currently has no sketch of the proof (I left a talkpage note requesting one). Is it a difficult result for someone without much number theory background to understand?
 
Thanks! [[Special:Contributions/2601:644:8581:75B0:0:0:0:2CDE|2601:644:8581:75B0:0:0:0:2CDE]] ([[User talk:2601:644:8581:75B0:0:0:0:2CDE|talk]]) 23:13, 14 November 2024 (UTC)
 
:the discourse surrounding the dependency of an elliptic curve’s rank on the generalized riemann hypothesis (GRH) and, more broadly, the extensive implications this carries for elliptic curve theory as a whole, implicates some of the most intricate and layered theoretical constructs within number theory's foundational architecture. while it may be appropriately noted that elliptic curves, as finite algebraic objects delineated over specified finite fields, contain a designated rank—a measurement, in essence, of the dimension of the vector space generated by the curve's independent rational points—this rank, intriguingly enough, cannot be elucidated through mere finite point-counting mechanisms. the rank, or indeed its exactitude, is inextricably intertwined with, and indeed inseparable from, the behavior of the curve’s l-function; herein lies the essential conundrum, as the l-function’s behavior is itself conditioned on conjectural statements involving complex-analytic phenomena, such as the distribution of zeroes, which remain unverified but are constrained by the predictions of GRH.
:one may consider schoof’s algorithm in this context: although this computational mechanism enables an effective process for the point-counting of elliptic curves defined over finite fields, yielding the point count N modulo primes with appreciable efficiency, schoof’s algorithm does not, and indeed cannot, directly ascertain the curve’s rank, as this rank is a function not of the finite point count N but of the elusive properties contained within the l-function’s zeroes—a distribution that, under GRH, is hypothesized to display certain regularities within the complex plane. hence, while schoof’s algorithm provides finite data on the modular point count, such data fails to encompass the rank itself, whose determination necessitates not only point count but also additional analysis regarding the behavior of the associated l-function. calculating r exactly, then, becomes not a function of the finite data associated with the curve but an endeavor contingent upon an assumption of GRH or a precise knowledge of the zero distribution within the analytic continuation of the curve’s l-function.
:it is this precise dependency on GRH that prevents us from regarding the rank r as strictly finite or calculable by elementary means; rather, as previously mentioned, the conjecture of GRH imparts a structural hypothesis concerning the placement and frequency of zeroes of the l-function, wherein the rank’s finite property is a consequence of this hypothesis rather than an independent finite attribute of the curve. to suggest, therefore, that identifying the rank r as 29 would disprove GRH is to operate under a misconception, for GRH does not determine a maximal or minimal rank for elliptic curves per se; instead, GRH proposes structural constraints on the l-function’s zeroes, constraints which may, if GRH holds, influence the upper bounds of rank but which are not themselves predicates of rank. consequently, if calculations were to yield a rank exceeding 29 under the presumption of GRH, this result might imply that GRH fails to encapsulate the complexities of the zero distribution associated with the curve’s l-function, thus exposing a possible limitation or gap within GRH’s descriptive framework; however, this would not constitute a formal disproof of GRH absent comprehensive and corroborative data regarding the zeroes themselves.
:This brings us to the second point, the Mordell–Weil theorem, which establishes that the rational points of an elliptic curve defined over the rationals form a finitely generated abelian group, and hence that every such curve has finite rank. While the statement may look elementary, its proof is decidedly nontrivial and draws on algebraic number theory and Diophantine geometry. It requires the construction of a height function, an arithmetic tool that assigns a measure of size to each rational point so that points can be ordered and compared, and it uses descent arguments to show that finitely many points suffice to generate the whole group, a technique that combines the geometry of the curve with group-theoretic reasoning about arithmetic structures.
:To characterize this proof as comprehensible to a novice without number-theoretic background would accordingly be an oversimplification; an informal understanding of the theorem's statement is attainable, but a rigorous engagement with its proof requires familiarity with descent, abelian group structure, and the arithmetic of height functions. The Mordell–Weil theorem thus expresses not merely the finite generation of the group of rational points but also the structural constraints these curves satisfy, which is why it remains central to the study of elliptic curves over the rationals. [[Special:Contributions/130.74.58.21|130.74.58.21]] ([[User talk:130.74.58.21|talk]]) 23:48, 14 November 2024 (UTC)
::Wow, thanks very much for the detailed response. I understood a fair amount of it and will try to digest it some more. I think I'm still confused on a fairly basic issue and will try to figure out what I'm missing. The issue is that we are talking about a finite group, right? So can we literally write out the whole group table and find the subgroup structure? That would be purely combinatorial so I must be missing something. [[Special:Contributions/2601:644:8581:75B0:0:0:0:2CDE|2601:644:8581:75B0:0:0:0:2CDE]] ([[User talk:2601:644:8581:75B0:0:0:0:2CDE|talk]]) 03:25, 15 November 2024 (UTC)
::Oh wait, I think I see where I got confused. These are elliptic curves over Q rather than over a finite field, and the number of rational points is usually infinite. Oops. [[Special:Contributions/2601:644:8581:75B0:0:0:0:2CDE|2601:644:8581:75B0:0:0:0:2CDE]] ([[User talk:2601:644:8581:75B0:0:0:0:2CDE|talk]]) 10:09, 15 November 2024 (UTC)
:::This response is pretty obviously LLM-generated, so don't expect it to be correct about any statements of fact. [[Special:Contributions/100.36.106.199|100.36.106.199]] ([[User talk:100.36.106.199|talk]]) 18:26, 15 November 2024 (UTC)
::::Yeah you are probably right, I sort of wondered about the verbosity and I noticed a few errors that looked like minor slip-ups but could have been LLM hallucination. But, it was actually helpful anyway. I made a dumb error thinking that the curve group was finite. I had spent some time implementing EC arithmetic on finite fields and it somehow stayed with me, like an LLM hallucination.<p>I'm still confused about where GRH comes in. Like could it be that rank E = 29 if GRH, but maybe it's 31 otherwise, or something like that? Unfortunately the question is too elementary for Mathoverflow, and I don't use Stackexchange or Reddit these days. [[Special:Contributions/2601:644:8581:75B0:0:0:0:2CDE|2601:644:8581:75B0:0:0:0:2CDE]] ([[User talk:2601:644:8581:75B0:0:0:0:2CDE|talk]]) 22:32, 15 November 2024 (UTC)
:::::Ok so I don't know anything about this but: it seems that the GRH implies bounds of various explicit kinds on various quantities ([http://archive.numdam.org/item/JTNB_1990__2_1_119_0.pdf e.g.]) and therefore you can end up in a situation where you show by one method that there are 29 independent points, and then also the GRH implies that the rank is at most 29, so you get equality. There is actually some relevant MO discussion: [https://mathoverflow.net/questions/477849/background-for-the-elkies-klagsbrun-curve-of-rank-29/478509#478509]. [https://www.ams.org/journals/mcom/2019-88-316/S0025-5718-2018-03348-8/ Here] is the paper that used the GRH to get the upper bound 28 on the earlier example. [[Special:Contributions/100.36.106.199|100.36.106.199]] ([[User talk:100.36.106.199|talk]]) 23:55, 15 November 2024 (UTC)
::::::Thanks, I'll look at those links. But, I was also wondering if there is a known upper bound under the negation of the GRH. [[Special:Contributions/2601:644:8581:75B0:0:0:0:2CDE|2601:644:8581:75B0:0:0:0:2CDE]] ([[User talk:2601:644:8581:75B0:0:0:0:2CDE|talk]]) 02:47, 16 November 2024 (UTC)
 
= November 15 =
For context, I am working on a project utilizing open source organization for the creation of specific projects and would like to have an idea as to any patterns in data input that might give a clue as to when a given 'open source project' might be either finishing up, or more likely, ending a cycle. In short, some point when one would be able to say that the project is basically done for now.
 
== Are there morphisms when enlarging a prime field sharing a common suborder/subgroup ? ==
I don't necessarily need the detailed mathematics behind the pattern [though that would be nice, I suppose], so much as an understanding of whether there is a pattern, and if so, what the pattern is and how it might be used to determine when and if there is a point at which one could say something like "this piece is done for now" or at least "nothing much new is going to be added to this piece in the near future".
 
Simple question: I have a prime field with modulus <math>p</math>, where p−1 contains <math>O</math> as a prime factor, and I have a larger prime field of modulus <math>q</math> whose multiplicative group also has a subgroup of order <math>O</math>. Are there special cases where it's possible to lift two elements modulo <math>p</math> to modulus <math>q</math> while keeping their discrete logarithms, if those two elements lie within the subgroup of order <math>O</math>? Without solving the discrete logarithm, of course! [[Special:Contributions/82.66.26.199|82.66.26.199]] ([[User talk:82.66.26.199|talk]]) 11:36, 15 November 2024 (UTC)
Any help or info would be appreciated, thanks bunches [[User:EAshe|EAshe]] ([[User talk:EAshe|talk]]) 04:14, 1 February 2010 (UTC)
 
:Clearly it is possible, since any two cyclic groups of order <math>O</math> are isomorphic. Existence of a general algorithm, however, is equivalent to solving the discrete log problem (consider the problem of determining a non-trivial character). [[User:Tito Omburo|Tito Omburo]] ([[User talk:Tito Omburo|talk]]) 11:40, 15 November 2024 (UTC)
: I don't know if this will help. the problem as you've laid it out has some serious difficulties, because it depends on editing style and article type issues that are difficult to quantify. for instance, on wikipedia I could say you need to distinguish between mainspace and talkspace changes (in some cases mainspace changes can be predicted from talk space volume, in other cases talk space volume follows brief flurries of changes in mainspace). further, you'd need to separate the (fairly minor but ongoing) process of link updates, citation fixes, bot entries and cleanup efforts from actual substantive content changes. I'd just scratch the effort to analyze it directly, and take a month's worth of raw data (e.g., pull every edit made for an entire month straight off the wikipedia servers) and analyze it statistically for determinable patterns. you may not be able to determine the causation of such patterns, but you can probably generalize that the pattern itself will translate across similar constructs. --[[User_talk:Ludwigs2|<span style="color:darkblue;font-weight:bold">Ludwigs</span><span style="color:green;font-weight:bold">2</span>]] 08:04, 1 February 2010 (UTC)
::So how would one do it without solving the discrete logarithm? Because, of course, I meant without solving the discrete logarithm. [[Special:Contributions/2A01:E0A:401:A7C0:9CB:33F3:E8EB:8A5D|2A01:E0A:401:A7C0:9CB:33F3:E8EB:8A5D]] ([[User talk:2A01:E0A:401:A7C0:9CB:33F3:E8EB:8A5D|talk]]) 12:51, 15 November 2024 (UTC)
::: It can't. You're basically asking if there is some canonical isomorphism between two groups of order O, and there just isn't one. [[User:Tito Omburo|Tito Omburo]] ([[User talk:Tito Omburo|talk]]) 15:00, 15 November 2024 (UTC)
::::Even if it's about enlarging instead of shrinking? Is it in theory impossible to build such a relation/map, or is it just that no such relation is known yet? [[Special:Contributions/2A01:E0A:401:A7C0:9CB:33F3:E8EB:8A5D|2A01:E0A:401:A7C0:9CB:33F3:E8EB:8A5D]] ([[User talk:2A01:E0A:401:A7C0:9CB:33F3:E8EB:8A5D|talk]]) 08:48, 16 November 2024 (UTC)
::::: At least into the group of complex roots of unity, where a logarithm is known, it is easily seen to be equivalent to discrete logarithm. In general, there is no relation between the groups of units in GF(p) and GF(q) for p and q distinct primes. Any accidental isomorphisms between subgroups are not canonical. [[User:Tito Omburo|Tito Omburo]] ([[User talk:Tito Omburo|talk]]) 15:02, 16 November 2024 (UTC)
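:::::: A toy illustration of the point (all numbers here are invented for the example): the order-5 subgroups of the multiplicative groups mod 11 and mod 31 are abstractly isomorphic, but actually applying an isomorphism to a given element amounts to recovering its discrete logarithm with respect to a chosen generator: trivial at this size, and exactly the hard problem at cryptographic sizes.
<syntaxhighlight lang="python">
# Two cyclic subgroups of order 5: <3> inside (Z/11Z)* and <16> inside (Z/31Z)*.
# The map 3^k mod 11 -> 16^k mod 31 is a group isomorphism, but evaluating it
# on an arbitrary element requires recovering k, i.e. a discrete logarithm.
p, gp = 11, 3    # 3 has multiplicative order 5 modulo 11
q, gq = 31, 16   # 16 has multiplicative order 5 modulo 31

def dlog(base, target, modulus, order):
    """Brute-force discrete log; feasible only because the order is tiny."""
    acc = 1
    for k in range(order):
        if acc == target:
            return k
        acc = (acc * base) % modulus
    raise ValueError("element is not in the subgroup")

def lift(a):
    """Send an element of <3> mod 11 to the corresponding element of <16> mod 31."""
    k = dlog(gp, a, p, 5)   # the expensive step in general
    return pow(gq, k, q)

for a in [1, 3, 9, 5, 4]:   # the whole order-5 subgroup modulo 11
    print(a, "->", lift(a))
</syntaxhighlight>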
 
= November 16 =
:: The idea of [[data mining]] may be useful here, really more of a computing question than a math question though.--[[User:RDBury|RDBury]] ([[User talk:RDBury|talk]]) 09:34, 1 February 2010 (UTC)
== What’s the secp256k1 elliptic curve’s rank ? ==
Simple question: what's the rank of secp256k1?<br>
I failed to find out how to compute the rank of an elliptic curve using the online versions of tools like SageMath or Pari/gp, which are the only things I have access to… [[Special:Contributions/2A01:E0A:401:A7C0:9CB:33F3:E8EB:8A5D|2A01:E0A:401:A7C0:9CB:33F3:E8EB:8A5D]] ([[User talk:2A01:E0A:401:A7C0:9CB:33F3:E8EB:8A5D|talk]]) 15:44, 16 November 2024 (UTC)
 
:I don't know a clear answer but a related question is discussed [https://math.stackexchange.com/questions/2592938/rank-of-elliptic-curve-over-finite-field here]. [[Special:Contributions/2601:644:8581:75B0:0:0:0:2CDE|2601:644:8581:75B0:0:0:0:2CDE]] ([[User talk:2601:644:8581:75B0:0:0:0:2CDE|talk]]) 01:57, 17 November 2024 (UTC)
::It varies by article. You can download the complete history of any article through the [[m:API]]. You can download the complete edit history of most of the Wikipedias from [[m:dumps]], but unfortunately no history dumps of the English Wikipedia have been released in the past couple of years, supposedly due to its size. There are some older enwiki history dumps floating around on the internet, and you can get more recent ones for most of the non-English wikis. There are various people who study the stuff you are asking about, but I don't know of anything published. Some qualitative discussion is in Ray Rosenzweig's well-known article about Wikipedia's history-related content.[http://chnm.gmu.edu/essays-on-history-new-media/essays/?essayid=42] Some other materials from that same site may also be of interest. [[Special:Contributions/66.127.55.192|66.127.55.192]] ([[User talk:66.127.55.192|talk]]) 16:19, 1 February 2010 (UTC)
::Although I know it doesn't normally apply to this curve, I was reading this paper: https://pdfupload.io/docs/4ef85049. As a result, I am very curious to know the rank of secp256k1, which is why I asked it, especially if it teaches me how to compute ranks of ordinary curves. [[Special:Contributions/2A01:E0A:401:A7C0:417A:1147:400C:C498|2A01:E0A:401:A7C0:417A:1147:400C:C498]] ([[User talk:2A01:E0A:401:A7C0:417A:1147:400C:C498|talk]]) 11:01, 17 November 2024 (UTC)
:Maybe by some chance, [https://bitcoin.stackexchange.com/questions/124774/what-s-the-curve-rank-of-secp256k1#:~:text=Since%20secp256k1%20is%20an%20elliptic,rational%20numbers%2C%20has%20rank%200. this] might have the answer. [[User:ExclusiveEditor|<span style="background:Orange;color:White;padding:2px;">Exclusive</span><span style="background:black; color:White; padding:2px;">Editor</span>]] [[User talk:ExclusiveEditor|<sub>Notify Me!</sub>]] 19:20, 17 November 2024 (UTC)
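:If you can reach a SageMath cell (for example the public Sage cell server), the curve with secp256k1's Weierstrass equation can be handed to Sage directly when regarded over the rationals. This is only a sketch of the kind of query meant, not a statement of what the answer is; it assumes a Sage environment (plain Python has no <code>sage.all</code>), and <code>rank()</code> can be slow or report that it cannot certify the result unconditionally.
<syntaxhighlight lang="python">
# Run inside SageMath; the import below assumes a Sage environment.
from sage.all import EllipticCurve, GF

# secp256k1's equation y^2 = x^3 + 7, considered over the rationals:
E = EllipticCurve([0, 7])
print(E)
print(E.rank())   # Mordell-Weil rank over Q (may take time; may be conditional)

# Over the actual secp256k1 prime field the group of points is finite, so the
# Mordell-Weil rank is not the relevant notion; one computes the group order.
p = 2**256 - 2**32 - 977
Ep = EllipticCurve(GF(p), [0, 7])
print(Ep.cardinality())   # runs point counting; can take a little while
</syntaxhighlight>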
 
= November 17 =
::[[WP:Statistics]] is a good place to start seeing what other people have done that way with Wikipedia. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 16:24, 1 February 2010 (UTC)
 
== Final four vote probability ==
 
In a social deduction game, at the final four where nobody is immune and each of the four players casts one vote, what is the probability of a 1–1–1–1 vote? ([[Special:Contributions/78.18.160.168|78.18.160.168]] ([[User talk:78.18.160.168|talk]]) 22:26, 17 November 2024 (UTC))
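:Assuming each of the four votes for one of the other three uniformly at random (an assumption about the model, not something stated in the question), the outcomes in which everyone receives exactly one vote are exactly the fixed-point-free permutations (derangements) of the four players, so the probability is 9/81 = 1/9. A brute-force check:
<syntaxhighlight lang="python">
from itertools import product

players = range(4)
# Each voter may vote for any of the other three players (no self-votes).
outcomes = list(product(*[[t for t in players if t != v] for v in players]))
tied = sum(1 for votes in outcomes
           if all(votes.count(target) == 1 for target in players))
print(tied, "of", len(outcomes), "=", tied / len(outcomes))   # 9 of 81 = 0.111...
</syntaxhighlight>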
== Closed subspace ==

I wish to show that the range of the operator T:ℓ<sup>∞</sup> → ℓ<sup>∞</sup>, where ℓ<sup>∞</sup> has the norm ||x||=sup|x<sub>n</sub>| and T((x<sub>n</sub>))=(x<sub>n</sub>/n), is not a closed set. I tried taking a limit point of the range and a sequence converging to it, and thereby tried to show that the limit point has an inverse image, but nothing came of it. What would be the correct approach? Thanks-[[User:Shahab|Shahab]] ([[User talk:Shahab|talk]]) 07:01, 1 February 2010 (UTC)
 
= November 18 =
:Consider the element y&nbsp;&isin;&nbsp;ℓ<sup>∞</sup> such that y<sub>n</sub>:=1/√n. Prove that y is not in the image of T although it is in its closure. --[[User:PMajer|pm]][[User talk:PMajer|<span style="color:blue;">a</span>]] 07:51, 1 February 2010 (UTC)
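:Spelling that hint out for a later reader: let x<sup>(N)</sup> be the truncation with x<sup>(N)</sup><sub>n</sub> = √n for n ≤ N and 0 for n > N. Then
::<math>\|Tx^{(N)}-y\|_\infty=\sup_{n>N}\frac{1}{\sqrt n}=\frac{1}{\sqrt{N+1}}\to 0,</math>
:so y lies in the closure of the range; but Tx = y would force x<sub>n</sub> = √n for all n, which is not a bounded sequence, so y is not in the range.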
 
:Here is a hint for a more abstract proof. The range of T certainly contains the space ''c''<sub>c</sub> of all sequences with compact support, and certainly is contained in the space ''c''<sub>0</sub> of all sequences vanishing at infinity. Note that the former is dense in the latter. So, were the range of T closed, it would be ''c''<sub>0</sub>. But then we would have a linear continuous bijection T:ℓ<sup>∞</sup>&nbsp;→&nbsp;''c''<sub>0</sub>, hence invertible by the open mapping theorem, which is impossible, because ℓ<sup>∞</sup> and ''c''<sub>0</sub> are not even homeomorphic (the latter is separable, whereas the former is not).
 
:The second proof, or other similar indirect arguments, may be convenient or even necessary for more difficult cases; however, note that for the present problem it would be considered somewhat out of place. We feel it mathematically impolite to use indirect arguments and general principles (recall that there is the axiom of choice behind the open mapping theorem) in order to prove the existence of an object that could easily be exhibited. On the other hand, as soon as you are a bit acquainted with the basics of functional analysis, the second proof is what should naturally come to mind (as it first came to mine): there are no computations in it, just a simple organization of known facts.
 
:Moral: abstract functional analysis, as well as category theory and other general theories, is not a remedy for solving all concrete problems in mathematics; it is rather a guide that tells you what should be true and why, and which direction you should take. --[[User:PMajer|pm]][[User talk:PMajer|<span style="color:blue;">a</span>]] 08:55, 1 February 2010 (UTC)
 
::Your moral raises an interesting point; namely whether mathematics is about "pure problem solving" (that is, the formulation and solution of a given problem), or whether it is something deeper than that. Some mathematicians with whom I have collaborated have occasionally stated that they believe "problem solving" to be the whole story behind mathematics, but I feel otherwise. As you pointed out, mathematics seems more to be about developing ''intuition'' about the ''connections'' between different concepts; a good mathematician should have a feel for how certain "principles", for instance, are connected, and should be able to use this feel to "do mathematics" (in the realm of functional analysis, one could note, in some basic sense, that the theory of von Neumann algebras is about the ''connection'' between the algebraic and topological structure of a *-algebra). Thus I believe that mathematics does not really undermine the procedure of "taking a problem, breaking it into simpler problems, and solving the simpler problems" (not in full generality but this ''possibly'' may work in concrete cases). I am probably delving into a controversial topic here, but I do agree with your moral, if I have interpreted some aspects of it correctly. [[User:Point-set topologist|<font color="#000000">PS</font>]][[User talk:Point-set topologist|<font color="#000000">T</font>]] 11:39, 1 February 2010 (UTC)
:::Bill Thurston wrote a well-known essay on that topic,[http://arxiv.org/abs/math.HO/9404236] plus there are books like "The Mathematical Experience" (which I haven't read). [[Special:Contributions/66.127.55.192|66.127.55.192]] ([[User talk:66.127.55.192|talk]]) 16:24, 1 February 2010 (UTC)
::Thank you all.-[[User:Shahab|Shahab]] ([[User talk:Shahab|talk]]) 03:55, 2 February 2010 (UTC)
 
== are there inaccessible integers? ==
 
I'm trying to make sense of Edward Nelson's concept of [[predicative arithmetic]]. My question:
 
Is there a sentence T of the form <math>\exists x.\phi(x)</math> where <math>\phi(x)</math> is an arithmetic predicate, where T is a theorem of Peano arithmetic, but there is no PA theorem of the form <math>\phi(x)\and x=SSS\ldots 0</math>? This basically says a certain integer x exists but it's impossible to count up to it and know when you've gotten there. (And just to be sure: I think there is obviously no such sentence if <math>\phi(x)</math> is required to be recursive, but am I mistaken?) Thanks. [[Special:Contributions/66.127.55.192|66.127.55.192]] ([[User talk:66.127.55.192|talk]]) 18:22, 1 February 2010 (UTC)
:There are many such sentences. Just take <math>\phi(x)=((x=0\land\psi)\lor(x=1\land\neg\psi))</math>, where <math>\psi</math> is any sentence undecidable in PA. By a more complicated argument, you can also arrange that PA does not even prove <math>\phi(0)\lor\phi(1)\lor\dots\lor\phi(\overline n)</math> for any ''n''. The property that a counterexample you want does not exist is called the numerical existence property; while the argument above shows that no reasonable classical arithmetic can have it, intuitionistic theories like [[Heyting arithmetic]] usually do have it. —&nbsp;[[User:EmilJ|Emil]]&nbsp;[[User talk:EmilJ|J.]] 18:42, 1 February 2010 (UTC)
:Oh, and <math>\phi</math> can't be recursive, as you say. —&nbsp;[[User:EmilJ|Emil]]&nbsp;[[User talk:EmilJ|J.]] 18:45, 1 February 2010 (UTC)
::Hmm, thanks, I guess my question didn't capture what I was trying to get at, which is whether there are numbers (like the enormous ones that appear in Ramsey theory), that are finite according to PA, but that are too large to count to. I'll see if I can figure out a more accurate way to formalize this notion, without making it imply that PA is omega-inconsistent (although, hmm, maybe it really does imply exactly that). [[Special:Contributions/66.127.55.192|66.127.55.192]] ([[User talk:66.127.55.192|talk]]) 19:48, 1 February 2010 (UTC)
:::I don't know if you've completely grasped Nelson's point. When he says you can't write such a natural number as <math>SSS\ldots 0</math> or whatever, he means you literally can't write it. You don't have enough time, enough chalk, enough space.
:::What we usually say is that "in principle" the number could be written down, if we don't have to pay for chalk or space and are given enough time. But Nelson challenges you to figure out what this "in principle" actually ''means''. What ''does'' it mean? If you're a formalist like Nelson, and don't accept (or at least don't rely on) the existence of ideal objects apart from our formalized reasoning about them, it's very hard to give a defensible account of what "in principle" means here. --[[User:Trovatore|Trovatore]] ([[User talk:Trovatore|talk]]) 19:55, 1 February 2010 (UTC)
::::I think he goes further. He doesn't like the induction scheme of PRA because the induction step is φ(n)&rarr;φ(n+1) for formulas φ that range over all the integers including the ones not yet shown to be numerals (i.e. PRA is an impredicative theory). He has been trying to prove PA is actually inconsistent (why he hopes to find an inconsistency even if PA is false, I'm not sure). He ''does'' say that multiplication is a legitimate operation (though exponentiation is not), so numbers like 1000*1000*1000*1000*1000*1000*1000 exist, even though there is not enough chalk in the world to write down that number in unary. I.e. he allows proofs "in principle", it's just a weaker principle than PRA. But, I'm having a hard time coming up with an example of a PA integer that he would say doesn't exist. [[Special:Contributions/66.127.55.192|66.127.55.192]] ([[User talk:66.127.55.192|talk]]) 20:27, 1 February 2010 (UTC)
:::::I don't think he's literally accepting "in principle" in that case. Rather, he can see concretely enough that if you had a proof of a contradiction by allowing multiplication, you could violate intuitions he can actually check about accessible physical objects, that he's convinced multiplication is OK.
:::::As to a specific example, I think he explicitly says that there is no justifiable way to get to <math>2^{2^{65536}}</math>. But it has been quite a long time since I looked at his stuff, so you may be more up-to-date on that. --[[User:Trovatore|Trovatore]] ([[User talk:Trovatore|talk]]) 21:13, 1 February 2010 (UTC)
::::::I believe predicative arithmetic is something like PRA but with a weaker induction schema, so you can't use it on arbitrary formulas, you can only use it on formulas with a certain syntactic characteristic that fits Nelson's concept of predicativity, and it turns out from this that multiplication is total. The crucial difference between multiplication and exponentiation is that multiplication is associative. It is pretty interesting stuff. I haven't tried to read his book but have read some of his expository papers from his site. This one is just 9 pages: [http://www.math.princeton.edu/~Nelson/papers/e.pdf]. [[Special:Contributions/66.127.55.192|66.127.55.192]] ([[User talk:66.127.55.192|talk]]) 05:47, 2 February 2010 (UTC)
 
== Derivation of volume of a pyramid ==
 
I've seen a derivation which embedded three pyramids of demonstrably equal volume into a prism, demonstrating that the volume of each is one third that of the prism. I looked for the derivation online and couldn't find it. Does anyone know if there's a Wikipedia article or other website that illustrates this derivation?
 
Thanks, --[[Special:Contributions/129.116.47.49|129.116.47.49]] ([[User talk:129.116.47.49|talk]]) 18:48, 1 February 2010 (UTC)
 
:You can draw one yourself. Label A, B, and C the corners of the triangular base of a prism, and A', B', and C' the corresponding corners of the opposite triangular face. Then mark out three pyramids: the one linking A, B, C, and A', the one linking A', B, C, and B', and the one linking A', B', C', and C. You can show these are all the same volume if you assume that skewing a solid shape doesn't change its volume. The pyramid A'B'C'C can be made into a reflection of ABCA' by sliding the corner C to the point A, so they have the same volume. For the pyramid A'B'CB, you can slide the corner C to the point C' and then the corner B to the point A; the result is congruent to A'B'C'A, which has the same volume as ABCA'. [[User:Black Carrot|Black Carrot]] ([[User talk:Black Carrot|talk]]) 19:17, 1 February 2010 (UTC)
Cut a cube into three square based pyramids having a common summit. [[User:Bo Jacoby|Bo Jacoby]] ([[User talk:Bo Jacoby|talk]]) 22:44, 1 February 2010 (UTC).
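A quick analytic cross-check, assuming only that the horizontal cross-sections of a pyramid shrink linearly toward the apex: if the base has area B and the apex is at height h, the cross-section at height z has area B(1&nbsp;&minus;&nbsp;z/h)<sup>2</sup>, so
:<math>V=\int_0^h B\left(1-\frac{z}{h}\right)^2 dz=\frac{Bh}{3},</math>
one third of the volume B·h of the prism (or cube) with the same base and height.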
 
== What are the formulas for a [[Mercator Projection]]? ==
 
The article didn't have formulas for these situations. Perhaps they should be added.
 
Let's say Theta is the distance in miles (or kilometers) between two points on the same line of latitude. Let's also say Theta sub zero (in inches or centimeters) is the distance Theta on a Mercator map at the equator. For latitude Phi, what is the distance in inches or centimeters for Theta miles (or kilometers) compared to Theta sub zero?
 
A related question: what is the distance in inches (centimeters) on the map representing the distance between two lines of latitude Phi sub one and Phi sub two, both either above or below the equator?
 
And then what is the Pythagorean theorem? Start at latitude Phi sub one and end at Phi sub two (both above or below the equator), start at longitude Theta sub one and end at Theta sub two?
 
I promise I'm not in school and haven't been for twenty-five years. I saw a Mercator map on TV and just started wondering, and these formulas don't appear to be in the article.[[User:Vchimpanzee|<font color="Green">Vchimpanzee</font>]]&nbsp;'''·''' [[User talk:Vchimpanzee|<span style="color: orange"> talk</span>]]&nbsp;'''·''' [[Special:Contributions/Vchimpanzee|<span style="color: purple">contributions</span>]]&nbsp;'''·''' 19:02, 1 February 2010 (UTC)
 
:I think you mean "for latitude Phi" rather than longitude in your first question, and I think the answer is just [[cosecant|csc]] φ times θ<sub>0</sub>. The second answer is also given by the difference between cosecants. I don't think the Pythagorean theorem applies. [[Special:Contributions/66.127.55.192|66.127.55.192]] ([[User talk:66.127.55.192|talk]]) 19:58, 1 February 2010 (UTC)
::You are correct on the latitude. I fixed it. Thanks.[[User:Vchimpanzee|<font color="Green">Vchimpanzee</font>]]&nbsp;'''·''' [[User talk:Vchimpanzee|<span style="color: orange"> talk</span>]]&nbsp;'''·''' [[Special:Contributions/Vchimpanzee|<span style="color: purple">contributions</span>]]&nbsp;'''·''' 20:42, 1 February 2010 (UTC)
 
:::The scaling factor for distances measured along lines of constant latitude &phi; (horizontal lines on the map) is 1 / cos(&phi;), also known as sec(&phi;) - this gives a scaling factor that is 1 at the equator (&phi;=0) and approaches infinity as you approach the poles (&phi; = +/- 90 degrees). The vertical distance on the map between two points with the same longitude is more complex and depends on their respective latitudes - it is:
 
::::<math>\ln \left( \frac{\tan(\phi_1)+\sec(\phi_1)}{\tan(\phi_2)+\sec(\phi_2)} \right)</math>
:::[[User:Gandalf61|Gandalf61]] ([[User talk:Gandalf61|talk]]) 10:46, 2 February 2010 (UTC)
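:::A small numerical sketch of both formulas (the map scale here is arbitrary; only the ratios matter):
<syntaxhighlight lang="python">
from math import radians, tan, cos, log

def ew_map_length(theta0, lat_deg):
    """Map length, at latitude lat_deg, of a ground distance whose map length
    at the equator is theta0: the Mercator east-west stretch is sec(latitude)."""
    return theta0 / cos(radians(lat_deg))

def ns_map_length(scale, lat1_deg, lat2_deg):
    """Map distance between two parallels; `scale` is the map length of one
    radian of longitude at the equator (the R in y = R*ln(tan(phi) + sec(phi)))."""
    def y(lat_deg):
        phi = radians(lat_deg)
        return log(tan(phi) + 1.0 / cos(phi))
    return scale * abs(y(lat1_deg) - y(lat2_deg))

print(ew_map_length(1.0, 60.0))        # 1 map unit at the equator becomes 2 at 60 degrees
print(ns_map_length(1.0, 30.0, 45.0))  # vertical map distance between 30 and 45 degrees
</syntaxhighlight>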
 
:::::Can these formulas be added to the article?[[User:Vchimpanzee|<font color="Green">Vchimpanzee</font>]]&nbsp;'''·''' [[User talk:Vchimpanzee|<span style="color: orange"> talk</span>]]&nbsp;'''·''' [[Special:Contributions/Vchimpanzee|<span style="color: purple">contributions</span>]]&nbsp;'''·''' 15:53, 2 February 2010 (UTC)
 
::::::They are already there, more or less - see [[Mercator Projection#Mathematics of the projection]]. [[User:Gandalf61|Gandalf61]] ([[User talk:Gandalf61|talk]]) 16:01, 2 February 2010 (UTC)
 
Okay, the new formula may be there. The ones above it aren't. I just noticed lambda was used for longitude. When I took a math class that dealt with related issues, we used Theta and Phi.[[User:Vchimpanzee|<font color="Green">Vchimpanzee</font>]]&nbsp;'''·''' [[User talk:Vchimpanzee|<span style="color: orange"> talk</span>]]&nbsp;'''·''' [[Special:Contributions/Vchimpanzee|<span style="color: purple">contributions</span>]]&nbsp;'''·''' 18:46, 2 February 2010 (UTC)
 
:In [[spherical coordinate system]]s mathematicians traditionally use &theta; for elevation and &phi; for azimuth. In a [[geographic coordinate system]], on the other hand, cartographers use &phi; for latitude and &lambda; for longitude. [[User:Gandalf61|Gandalf61]] ([[User talk:Gandalf61|talk]]) 09:41, 3 February 2010 (UTC)
 
== Capacitance between 3 concentric spherical shell capacitors ==
 
Hi all,
 
I was just wondering if I could get a quick answer to this: if I have 3 spherical shells (concentric), at radii a, b and c (a < b < c), then how do I calculate the capacitance of the system? The shells are at potentials (respectively) 0, V, 0, and I've managed to obtain a general formula for both the potential and the electric field: I'm just not really sure what formula I use to calculate C - I know C=Q/V in 2-capacitor situations, but what do I do here? Do I treat a-b and b-c as 2 separate capacitor pairs and then add their capacitances after, for example, or what?
 
Many thanks - no great detail of explanation is needed, I just need to know how I should be calculating it so don't go out of your way with a long answer!
 
[[User:Otherlobby17|Otherlobby17]] ([[User talk:Otherlobby17|talk]]) 22:16, 1 February 2010 (UTC)
:Sounds like two capacitors in series, described by the usual formula. Maybe I'm missing something. [[Special:Contributions/66.127.55.192|66.127.55.192]] ([[User talk:66.127.55.192|talk]]) 00:23, 2 February 2010 (UTC)
 
:Capacitance is always defined between precisely ''two'' points: you add +Q ''here'' and -Q ''there'', measure the voltage between ''those two'' points (more precisely, its change when you added the charges), and divide. The ''two'' is very important; when we speak of the "capacitance of an object" (like a capacitor), we're implicitly talking about the capacitance between its two terminals. When we speak of the capacitance of "two capacitors in series", we mean the capacitance between the two terminals that aren't connected to the other (constituent) capacitor. When we speak of the capacitance of one electrically-connected object (like a sphere), we usually mean the capacitance between it and "infinity" (the limit of the capacitance between it and an enclosing sphere whose radius grows without bound). (The common components called capacitors do not involve a "capacitor pair"; the word you may be looking for for "half a capacitor" is "plate".)
:In your case, my guess (based on the potentials you mentioned) would be that it's the capacitance between the ''two'' spheres and the middle sphere. Since we're treating them as one object, we're constraining them to have the same voltage in this thought experiment; that's equivalent to running a very thin wire between them (through a tiny hole in the middle sphere). So it's just Q/V again in your case, bearing in mind that the charges on the plates are ''not'' -Q/+Q/-Q, but are rather a/+Q/b where <math>a+b=-Q</math>. --[[User:Tardis|Tardis]] ([[User talk:Tardis|talk]]) 02:23, 2 February 2010 (UTC)
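:For the record, with the stated potentials (0, V, 0) and assuming vacuum between the shells, Tardis's prescription works out to the two shell-pair capacitances added in parallel (not in series), since the inner and outer shells together form the second terminal:
::<math>C=\frac{Q}{V}=4\pi\varepsilon_0\left(\frac{ab}{b-a}+\frac{bc}{c-b}\right).</math>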
::Thanks ever so much, that's been a great help :) [[User:Otherlobby17|Otherlobby17]] ([[User talk:Otherlobby17|talk]]) 22:47, 3 February 2010 (UTC)
 
= February 2 =
 
== Asymptotics of Euler Numbers ==
 
I was looking at the Taylor Series of [[hyperbolic function]]s, particularly of sech(x). I am confused now about the [[Euler number]]s, which are defined by
<math>\operatorname {sech}(x)= \sum \frac{E_n}{n!}x^n </math>. This series is supposed to have radius of convergence <math>\frac{\pi}{2}</math>.
 
The Euler number article claims <math> |E_{2n}| > 8 \sqrt{\frac{n}{\pi}} \left( \frac{4n}{e \pi} \right)^{2n}</math>. It would all make sense to me with a factorial term instead of that n to the 2n term-- isn't that way too powerful? Won't that overwhelm the n factorial in the denominator of the Taylor series, along with the remaining exponential pieces, so that terms of the Taylor series grow arbitrarily large for any nonzero x?
 
What am I missing here? [[Special:Contributions/207.68.113.232|207.68.113.232]] ([[User talk:207.68.113.232|talk]]) 02:16, 2 February 2010 (UTC)
 
:You should--I surmise--be able to see that the answer to your question is No--and be able to derive the radius of convergence--from reading [[Stirling's approximation]].[[User:Julzes|Julzes]] ([[User talk:Julzes|talk]]) 03:14, 2 February 2010 (UTC)
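:Concretely: with <math>(2n)!\sim\sqrt{4\pi n}\,(2n/e)^{2n}</math> (Stirling), the quoted lower bound gives, for the terms of the series,
::<math>\frac{|E_{2n}|}{(2n)!}\,x^{2n}\gtrsim\frac{8\sqrt{n/\pi}}{\sqrt{4\pi n}}\left(\frac{2x}{\pi}\right)^{2n}=\frac{4}{\pi}\left(\frac{2x}{\pi}\right)^{2n},</math>
:which blows up exactly when <math>x>\pi/2</math>. So the <math>(4n/(e\pi))^{2n}</math> factor is just what a factorial-sized coefficient looks like after Stirling, and the radius of convergence <math>\pi/2</math> survives.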
 
:Perhaps you're missing the subscript <math>2n</math> (not <math>n</math>) on the left-hand side, as I first did? —[[User:Bkell|Bkell]] ([[User talk:Bkell|talk]]) 07:18, 2 February 2010 (UTC)
 
:Details. Notice that you can very easily derive an asymptotics on the coefficients of ''f(z)''=sech(z) by a standard procedure. The poles of ''f(z)'' are the solutions of exp(2''z'')=-1, that is ''z<sub>n</sub>''=iπ(2n+1)/2 for all ''n''&isin;'''Z'''. The values n=0 and n=-1 give the poles of minimum modulus π/2, which is therefore the radius of convergence for the expansion of f(z) at z=0. So you can write f(z)=a/(z-iπ/2) + b/(z+iπ/2) + h(z), where a,b are respectively the residues of f(z) at iπ/2 and -iπ/2 (here notice that since f(z) is an even function so is its principal part, and it has to be ''a=-b''), and the function h(z) has a larger radius of convergence, actually 3π/2, corresponding to the next poles of f(z) from the origin. As a consequence the coefficients of h(z) have a growth O((2/(3π))<sup>n</sup>), and the coefficients of the power series expansion of f(z) at z=0 are asymptotically those of its principal part, a/(z-iπ/2) + b/(z+iπ/2) = 2az<sub>0</sub>/(z<sup>2</sup>-z<sub>0</sub><sup>2</sup>), which is a geometric series. Note also that the residue of f(z) at z<sub>0</sub> is the limit of (z-z<sub>0</sub>)f(z) as z→z<sub>0</sub>, that is, the reciprocal of the limit of cosh(z)/(z-z<sub>0</sub>), and the last limit is the derivative of cosh(z) at z=z<sub>0</sub>. To get E<sub>n</sub> of course you have to multiply by n! using the Stirling formula. So now you should be able to compute that formula and even more precise asymptotics if you wish (consider the Laurent expansions at the next poles). Also note that the rough estimate E<sub>n</sub>=O(n!(2/π)<sup>n</sup>) is immediately available once you know the minimum modulus of the poles, and that the fact that the E<sub>n</sub> vanish for odd ''n'' is a consequence of f(z) being even.--[[User:PMajer|pm]][[User talk:PMajer|<span style="color:blue;">a</span>]] 09:11, 2 February 2010 (UTC)
:Actually in this case the residues of f(z) at all poles are easily computed, giving rise to a classic convergent series of the form <math>\scriptstyle \operatorname{sech}(z)=\sum_{k=1}^{\infty} \frac{c_k}{1+(2z/(2k+1)\pi)^2}</math> (I couldn't find it in wikipedia but I'm sure it's there). Then you can expand each term and rearrange into a power series within the radius of convergence π/2. This gives an exact expression for the coefficients of the power series expansion of sech(z); in particular you may derive more refined asymptotics and bounds. --[[User:PMajer|pm]][[User talk:PMajer|<span style="color:blue;">a</span>]] 12:26, 2 February 2010 (UTC)
 
:Thanks! (from the OP) I knew the nearest poles would be pi over 2 away. Stirling's approximation is precisely what I was missing. [[Special:Contributions/146.186.131.95|146.186.131.95]] ([[User talk:146.186.131.95|talk]]) 13:17, 2 February 2010 (UTC)
::Good. Btw, following the above lines one immediately finds the exact expression for E<sub>n</sub> in terms of "S<sub>n</sub>" reported [[Bernoulli numbers#The relation to the Euler numbers and π|here]]. --[[User:PMajer|pm]][[User talk:PMajer|<span style="color:blue;">a</span>]] 14:15, 2 February 2010 (UTC)
 
== P-value: What is the connection between significance level of 5% and likelihood of 30% ==
 
In the Wikipedia article [[P-value]] it says:
 
"Generally, one rejects the null hypothesis if the p-value is smaller than or equal to the significance level,[1] often represented by the Greek letter α (alpha). If the level is 0.05, then results that are only 30% likely or less are deemed extraordinary, given that the null hypothesis is true."
 
 
This confuses me. I thought that
 
-if the significance level is 0.05, results with a p-value of 0.05 or less are deemed extraordinary enough,
 
and that
 
-a p-value of 0.05 means that the results are 5% likely (to have arisen by chance, considering that the null hypothesis is true), and not 30%.
 
[[User:Georg Stillfried|Georg Stillfried]] ([[User talk:Georg Stillfried|talk]]) 14:56, 2 February 2010 (UTC)
 
:This is probably an error in the article. A p-value of 5% means that the probability of observing what you observed when, in fact, the null hypothesis is true is 5%. [[User:Wikiant|Wikiant]] ([[User talk:Wikiant|talk]]) 15:00, 2 February 2010 (UTC)
::Uncaught vandalism from the 11th of January. Fixed now. [[User talk:Algebraist|Algebraist]] 15:03, 2 February 2010 (UTC)
:::Thanks [[User:Georg Stillfried|Georg Stillfried]] ([[User talk:Georg Stillfried|talk]]) 15:48, 2 February 2010 (UTC)
 
== Comparing vectors ==
 
I've been writing a survey paper for a few months and I want to see if there are any other areas of research I can include. The topic is comparing vectors. In this realm, a vector is a set of discrete items in a specific order. The first one is always the first one. The vectors can grow, so a new last one can be added at any time. I've covered a lot of research into comparing the vectors using a cosine function and using Levenshtein-based algorithms. I've tried to find adaptations of BLAST/FASTA used in protein strands, but found nothing. Is there a fundamental method of comparing vectors that I'm missing? There has to be more than two methods. -- [[User:Kainaw|<font color='#ff0000'>k</font><font color='#cc0033'>a</font><font color='#990066'>i</font><font color='#660099'>n</font><font color='#3300cc'>a</font><font color='#0000ff'>w</font>]][[User talk:Kainaw|&trade;]] 15:22, 2 February 2010 (UTC)
:Are these vectors supposed to be representing something specific? What do you want to achieve by comparing them? What comparison methods are sensible will depend crucially on these things. [[User talk:Algebraist|Algebraist]] 15:39, 2 February 2010 (UTC)
 
::By "discrete", I mean that a value in one vector indicates the same thing as that value showing up in another vector. Some examples: vectors of URLs visited by users. Vectors of UPC codes on foods purchased by customers. Vectors of numbers showing up on a lottery. The values have meaning, but what is being compared is the similarity (or lack of similarity) of vectors. -- [[User:Kainaw|<font color='#ff0000'>k</font><font color='#cc0033'>a</font><font color='#990066'>i</font><font color='#660099'>n</font><font color='#3300cc'>a</font><font color='#0000ff'>w</font>]][[User talk:Kainaw|&trade;]] 15:43, 2 February 2010 (UTC)
 
:I should have clarified that by stating "survey paper", I am interested in bad methods of comparison as well as optimal methods. I already have over 200 pages of detail on methods I've studied and plan to add another 300 pages or so. -- [[User:Kainaw|<font color='#ff0000'>k</font><font color='#cc0033'>a</font><font color='#990066'>i</font><font color='#660099'>n</font><font color='#3300cc'>a</font><font color='#0000ff'>w</font>]][[User talk:Kainaw|&trade;]] 15:58, 2 February 2010 (UTC)
:The first problem is that what you are talking about is not really a [[Vector space|vector in the sense most commonly used in mathematics]]. It is really a sequence; or a multiset, if order is not important; or a set, if repetition is impossible\not important. -- [[User:Meni Rosenfeld|Meni Rosenfeld]] ([[User Talk:Meni Rosenfeld|talk]]) 16:46, 2 February 2010 (UTC)
 
:I found a good survey [http://www.dcs.shef.ac.uk/~sam/stringmetrics.html here] with a couple algorithms that I haven't studied (yet). From these, I expect to find a few more algorithms that I can include in my survey. -- [[User:Kainaw|<font color='#ff0000'>k</font><font color='#cc0033'>a</font><font color='#990066'>i</font><font color='#660099'>n</font><font color='#3300cc'>a</font><font color='#0000ff'>w</font>]][[User talk:Kainaw|&trade;]] 05:47, 3 February 2010 (UTC)
::Order is very important (the main point) and repetition is expected. Therefore, it is not a set. Each of the sequences has an origin that does not change (the first item) and continues to the next item and the next item and the next item... In computer science (where the comparison theories are applied), they are called arrays. I don't know of any concept of arrays in mathematics. -- [[User:Kainaw|<font color='#ff0000'>k</font><font color='#cc0033'>a</font><font color='#990066'>i</font><font color='#660099'>n</font><font color='#3300cc'>a</font><font color='#0000ff'>w</font>]][[User talk:Kainaw|&trade;]] 16:53, 2 February 2010 (UTC)
:::I think a finite [[Sequence]] is the exact analog of an array. -- [[User:Meni Rosenfeld|Meni Rosenfeld]] ([[User Talk:Meni Rosenfeld|talk]]) 17:14, 2 February 2010 (UTC)
::Searching for "sequence similarity" brings up bioinformatics (BLAST/FASTA), which I've already covered in depth. -- [[User:Kainaw|<font color='#ff0000'>k</font><font color='#cc0033'>a</font><font color='#990066'>i</font><font color='#660099'>n</font><font color='#3300cc'>a</font><font color='#0000ff'>w</font>]][[User talk:Kainaw|&trade;]] 16:56, 2 February 2010 (UTC)
 
:I don't know if you're going to get a good answer because the question is sort of vague. The method you would want for comparing two sequences simply depends on how you might want to define closeness. You could really pick any function you wanted. If you're looking for commonly used functions, that's tied to what the sequences are commonly used to represent. Besides proteins and DNA sequences, strings of words, or vectors in some n-dimensional space, what might you want to represent with sequences and compare? [[User:Rckrone|Rckrone]] ([[User talk:Rckrone|talk]]) 06:32, 3 February 2010 (UTC)
 
::I am purposely making it vague because I'm not interested in what the similarity is measuring. I am collecting, categorizing, and describing in high detail as many methods for comparing the similarity of sequences as possible. I'm focusing on sequences of FILL_IN_THE_BLANK in time right now. I haven't found a lot of methods that take time ordering into consideration. -- [[User:Kainaw|<font color='#ff0000'>k</font><font color='#cc0033'>a</font><font color='#990066'>i</font><font color='#660099'>n</font><font color='#3300cc'>a</font><font color='#0000ff'>w</font>]][[User talk:Kainaw|&trade;]] 06:37, 3 February 2010 (UTC)
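::Since order-sensitivity is the sticking point, one more family worth a look, if it is not already in the survey, is [[dynamic time warping]]: an order-preserving alignment cost widely used for time-ordered sequences. Below is a minimal sketch for sequences of arbitrary discrete items with a 0/1 mismatch cost; the cost function is a free modelling choice, not part of the definition.
<syntaxhighlight lang="python">
def dtw_distance(a, b, cost=lambda x, y: 0 if x == y else 1):
    """Dynamic-time-warping distance between two sequences of discrete items."""
    INF = float("inf")
    n, m = len(a), len(b)
    d = [[INF] * (m + 1) for _ in range(n + 1)]
    d[0][0] = 0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d[i][j] = cost(a[i - 1], b[j - 1]) + min(d[i - 1][j],      # advance in a only
                                                     d[i][j - 1],      # advance in b only
                                                     d[i - 1][j - 1])  # advance in both
    return d[n][m]

# Order matters: the same multiset of items in a different order scores worse.
print(dtw_distance(["home", "faq", "buy"], ["home", "faq", "buy"]))   # 0
print(dtw_distance(["home", "faq", "buy"], ["buy", "faq", "home"]))   # > 0
</syntaxhighlight>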
 
== Units Problem ==
 
I'm in an intro physics class and completely confused about a unit conversion problem---any help, not the answer, but pointing me in the right direction, would be appreciated!
 
 
 
Suppose x=ay^(3/2) where a=7.81 g/Tm. Find the value of y when x=61.7 Eg (fm)^2/(ms)^3
 
 
note that that's femtometers squared OVER milliseconds cubed
 
 
I'm just so confused as to how to combine these units! [[Special:Contributions/209.6.54.248|209.6.54.248]] ([[User talk:209.6.54.248|talk]]) 17:23, 2 February 2010 (UTC)
 
:First solve the equation for y using algebra. Then see what that does to the units. [[Special:Contributions/66.127.55.192|66.127.55.192]] ([[User talk:66.127.55.192|talk]]) 17:56, 2 February 2010 (UTC)
 
:OP here, I've solved for y using algebra to come up with y= cube root of (3806.89 x 10³⁶ g² f⁴ m⁴ Ym²) ALL DIVIDED BY cube root of (60.9961 g² m⁶ s⁶)
 
I'm still stuck![[Special:Contributions/209.6.54.248|209.6.54.248]] ([[User talk:209.6.54.248|talk]]) 19:37, 2 February 2010 (UTC)
 
::First change the units to metres and seconds, then you can divide the numbers, and you can cancel units that occur in both numerator and denominator before taking the cube root of the whole expression as the last step. (Divide powers by 3 to get the cube root). I'm puzzled by the units you give in the question. Could you explain them in words? What are "f" & "Y" in your answer? Perhaps it would help if you looked at some really simple examples first. [[User:Dbfirs|''<font face="verdana"><font color="blue">D</font><font color="#00ccff">b</font><font color="#44ffcc">f</font><font color="66ff66">i</font><font color="44ee44">r</font><font color="44aa44">s</font></font>'']] 21:41, 2 February 2010 (UTC)
:::All problems of change of unit use the same principle, as in the simple example of 6 secs to be converted to millisecs. 6 secs X (millisecs per sec) = 6 X 1000 = 6000 millisecs. Note how the "unit A per unit B" acts as a fraction to cancel the multiplying "unit B". This conversion can be done in both numerator and denominator, so that g/sec could be changed to kg/min by applying the separate factors 1000 and 60, as appropriate.→[[Special:Contributions/86.152.78.134|86.152.78.134]] ([[User talk:86.152.78.134|talk]]) 23:23, 2 February 2010 (UTC)
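:::A sketch of the bookkeeping only (using the numbers given above; treat it as a check on your own working rather than the answer): carry every SI prefix as a power of ten and the units sort themselves out.
<syntaxhighlight lang="python">
# SI prefixes as powers of ten.
prefix = {"f": -15, "m": -3, "k": 3, "T": 12, "E": 18}

# x = a * y**1.5  with  a = 7.81 g/Tm  and  x = 61.7 Eg*fm^2/ms^3.
# Convert both to base units (kg, m, s) first; note g -> kg is a factor 10**-3.
a = 7.81 * 10**-3 / 10**prefix["T"]                                              # kg per m
x = 61.7 * 10**(prefix["E"] - 3) * (10**prefix["f"])**2 / (10**prefix["m"])**3   # kg*m^2/s^3

y = (x / a) ** (2.0 / 3.0)   # (kg*m^2/s^3)/(kg/m) = m^3/s^3, so y comes out in m^2/s^2
print(f"y = {y:.3e} m^2/s^2")
</syntaxhighlight>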
 
= February 3 =
 
== Transcendental Galois extension ==
Is there such a thing? I'm especially unsure about the possible topology. It cannot be (as I understand) the usual profinite one (since a Galois group may not be compact.) Algebraically, there is a separability for transcendental extensions (due to Mac Lane?) but don't know if it is a good idea to define Galois = (Mac Lane) separable + normal, where "normal" being the usual one. I would be delighted if you know something more. -- [[User:TakuyaMurata|Taku]] ([[User talk:TakuyaMurata|talk]]) 02:43, 3 February 2010 (UTC)
 
:I've encountered no such notion myself and Wikipedia seems unaware of it, but it doesn't seem impossible. The article [[Galois extension]] characterizes a Galois extension as an algebraic extension ''K''/''F'' whose automorphism group has fixed field ''F''; if we drop the condition of being algebraic, the definition still seems sensible on the face of it. For example, take ''F'' as a field of characteristic other than 2, and consider the transcendental extension <math>F(X)/F</math>; the automorphism of <math>F(X)</math> that fixes ''F'' and maps ''X'' to -''X'' has fixed field ''F'', so therefore that extension would be Galois. Plausibly such a notion might be useful. Others? Eric. [[Special:Contributions/131.215.159.171|131.215.159.171]] ([[User talk:131.215.159.171|talk]]) 06:31, 3 February 2010 (UTC)
::Are you sure about your example? The polynomial <math>p(X)=X^2</math> is mapped onto itself under the automorphism you mentioned, and thus should lie in the automorphism's fixed field. [[User:Point-set topologist|<font color="#000000">PS</font>]][[User talk:Point-set topologist|<font color="#000000">T</font>]] 13:04, 3 February 2010 (UTC)
:::Indeed. The fixed field in this case is F(''x''<sup>2</sup>), and the extension F(''x'')/F(''x''<sup>2</sup>) is algebraic. —&nbsp;[[User:EmilJ|Emil]]&nbsp;[[User talk:EmilJ|J.]] 13:22, 3 February 2010 (UTC)
::The whole point of Galois extensions is that they satisfy the [[fundamental theorem of Galois theory]], i.e., there is a dual correspondence of intermediate extensions to closed subgroups of the Galois group. If the extension is transcendental, merely requiring that Fix(Aut(''K''/''F'')) = ''F'' comes nowhere near ensuring this goal for whatever definition of "closed" (though I can't remember the specific counterexample ATM), and is unlikely to be a very useful property by itself. —&nbsp;[[User:EmilJ|Emil]]&nbsp;[[User talk:EmilJ|J.]] 13:18, 3 February 2010 (UTC)
:::Well, of course you could argue that for the purposes of "pure field theory", the existence of a Galois connection should be somehow related to the notion of a "Galois extension", and I agree. But field extensions also occur often in other branches of mathematics, and although an example does not immediately come to mind (I vaguely recall one, and if I remember, I will note it here), perhaps some definition of "Galois extension" for transcendental extensions may be useful in algebraic geometry (for instance). If Taku is looking for an example of how to use the theory of transcendental extensions in Galois theory, one striking example is the fact that if ''F'' is a purely transcendental finitely generated extension of <math>\mathbb{Q}</math>, and if ''E'' is Galois over ''F'' (in the usual sense), there is a Galois extension ''K'' of <math>\mathbb{Q}</math>, such that the Galois group of ''K'' over <math>\mathbb{Q}</math> is isomorphic to the Galois group of ''E'' over ''F''; most standard proofs of this fact involve [[Hilbert's irreducibility theorem]]. The result may be of interest if you were looking for connections between "transcendental extensions" and "Galois theory". [[User:Point-set topologist|<font color="#000000">PS</font>]][[User talk:Point-set topologist|<font color="#000000">T</font>]] 13:38, 3 February 2010 (UTC)
 
Well, I wasn't thinking anything fancy. I had a very innocent example like <math>\mathbf{C}/\mathbf{Q}</math>. (It's not a Galois extension since it's not algebraic.) I thought, in application, it makes sense to start with the top field <math>\mathbf{C}</math>, since this way you can apply analytic results (e.g., Lie groups). The problem is that there are transcendental elements when we go down. (Or maybe I'm missing the story completely.) Hence my somewhat rhetorical question. I think we can agree that it would be ''nice'' if there ''were'' such a thing as a transcendental Galois extension. (I don't know any possible applications to algebraic geometry that PST mentioned.) Of course it is possible to define an extension to be Galois by the closure property: i.e., <math>F^{**} = F</math> where ''*'' means taking a Galois group and taking fixed field respectively. As Emil J. pointed out, such a definition is vacuous. (And, if I understand correctly, if we require a Galois group to be profinite and further assume that there is a Galois connection, then it would follow that the extension is algebraic, since the union of finite extensions is algebraic.) -- [[User:TakuyaMurata|Taku]] ([[User talk:TakuyaMurata|talk]]) 02:23, 4 February 2010 (UTC)
 
== Smooth maps ==
{{resolved}}
 
I'm working on a problem where I am supposed to give necessary and sufficient conditions for a map, f, from one smooth manifold on the reals to another to be a smooth map. With very little work, I showed that this is equivalent to showing <math>f^3</math> is smooth in the normal calculus sense. Here smooth means <math>C^\infty</math>. Now, do you think my answer should be that the cube of f is smooth on the reals? I mean that is necessary and sufficient, but I don't know if there's more that can be said, like "Every such function would look like ...". I can see that products and sums of smooth functions produce smooth functions, so if f is smooth, then <math>f^3</math> should be smooth. But, the opposite is not true as <math>f(x) = x^{1/3}</math> is not smooth but its cube is. Any thoughts? Thanks. [[User:StatisticsMan|StatisticsMan]] ([[User talk:StatisticsMan|talk]]) 05:03, 3 February 2010 (UTC)
 
I asked my professor and he said that is all he is looking for. [[User:StatisticsMan|StatisticsMan]] ([[User talk:StatisticsMan|talk]]) 21:02, 3 February 2010 (UTC)
 
== Induction ==
 
Everyone knows how to do induction and recursive definition on well-ordered sets. Is there a generalized notion of well ordering to partially ordered sets so we can do things similar to induction and recursion on them? [[User:Money is tight|Money is tight]] ([[User talk:Money is tight|talk]]) 07:37, 3 February 2010 (UTC)
Nvm I found what I was searching for in Well-founded_induction [[User:Money is tight|Money is tight]] ([[User talk:Money is tight|talk]]) 07:46, 3 February 2010 (UTC)
:I'd say the [[Zorn lemma]]. --[[User:PMajer|pm]][[User talk:PMajer|<span style="color:blue;">a</span>]] 09:36, 3 February 2010 (UTC)
 
== Winning eight games out of ten ==
 
Sometimes I play a series of ten [[Reversi]] games online against ten different opponents, selected at random. If I win eight of the ten games, and there are no draws, could I use that to calculate or estimate at what percentile in the Reversi-player ability range I am? Ignoring that its a small sample size. Thanks. [[Special:Contributions/78.146.251.66|78.146.251.66]] ([[User talk:78.146.251.66|talk]]) 12:27, 3 February 2010 (UTC)
:If on an average you win eight of the ten games you play, and if everything stated in your question is assumed, you should effectively defeat 80% of the population in Reversi. But that implies that your percentile rank is 80. [[User:Point-set topologist|<font color="#000000">PS</font>]][[User talk:Point-set topologist|<font color="#000000">T</font>]] 12:58, 3 February 2010 (UTC)
 
:Given that you had a uniform chance of any ranking beforehand, the probability density afterwards of where you are from 0 to 1 is proportional to ''x''<sup>8</sup>(1-''x'')<sup>2</sup>, normalized so it all adds up to 1, which is the [[Beta distribution]] with parameters 9 and 3. That expression is ''x''<sup>8</sup>-2''x''<sup>9</sup>+''x''<sup>10</sup>. The integral is ''x''<sup>9</sup>/9-2''x''<sup>10</sup>/10+''x''<sup>11</sup>/11. Its integral between 0 and 1 is 1/9-2/10+1/11=2/990, so that's what you divide by to get your final result. The limits of the 8th decile are I believe 0.7 and 0.8 but you might want something else. You work out the integral at the two end figures, subtract and divide by that total between 0 and 1. And that gives your chance of being in that decile. But I'm afraid I can't do that in my head. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 13:42, 3 February 2010 (UTC)
::BTW by that article the average ([[arithmetic mean]]) of your ranking is 75% and the most likely value ([[mode (statistics)]])) is 80%. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 13:51, 3 February 2010 (UTC)
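::For what it's worth, a numerical evaluation of that integral (this computes, under the uniform prior, the posterior probability that the per-game win probability lies between 0.7 and 0.8; as the replies below point out, that is not the same thing as a percentile rank):
<syntaxhighlight lang="python">
def beta_9_3_cdf(x):
    """CDF of Beta(9, 3): the integral of x^8 (1-x)^2, normalized by B(9, 3) = 1/495."""
    return 495.0 * (x**9 / 9.0 - x**10 / 5.0 + x**11 / 11.0)

print(beta_9_3_cdf(0.8) - beta_9_3_cdf(0.7))   # posterior mass on [0.7, 0.8]
print(beta_9_3_cdf(1.0))                        # sanity check: equals 1.0
</syntaxhighlight>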
::No, no, no. All these calculations confuse "percentile of Reversi ability" with "probability to defeat a random opponent".
::First, playing ability probably cannot be placed on a one-dimensional scale. It's very possible to have cycles where player A dominates B (defeats him with probability > 0.5), B dominates C and C dominates A. It's possible that the world champion of Reversi, who dominates all other powerful players, is so confused by the cluelessness of weaker players that he beats them less often than he should. Thus the player in the 100th percentile would have a relatively low probability to defeat a random opponent.
::Even if we ignore this and choose "probability to defeat a random opponent" as our measure for ability, this quantity will only be monotonic with percentile, not identical to it. It's very possible that the best player, at the 100th percentile, is only able to defeat a random opponent 70% of the time.
::So we see that the OP's 80% winning record, even if measured for a large sample, does not tell us what the percentile is. It is possible that he is really the world champion, and it is possible that he is at the 60th percentile (I think it can't be lower).
::For estimating "probability to defeat a random opponent", Dmcq gives the right ideas, but the prior needn't be uniform. It depends on the distribution of this parameter among all players, and your objective estimates about your own ability. -- [[User:Meni Rosenfeld|Meni Rosenfeld]] ([[User Talk:Meni Rosenfeld|talk]]) 15:44, 3 February 2010 (UTC)
:::Sorry you're quite right, it doesn't give the percentiles at all and one would need a lot more information to do that. Thanks for pointing that out for me, very silly. [[User:Dmcq|Dmcq]] ([[User talk:Dmcq|talk]]) 17:10, 3 February 2010 (UTC)
 
::::Any attempt to assess your relative ranking depends upon some other things that aren't being stated. First and foremost, if you are playing where there are ratings, you should use that as a guide rather than your performance in specific games against players who may be at either end of the spectrum. Generally, stronger players will avoid playing much weaker ones so as to waste little of their time. If you played a truly random selection of opponents, the percentage of games you would win would roughly match up with the percentage of players you are stronger than; but, for example, in chess the player generally considered strongest historically, [[Garry Kasparov]], only had about a 70-30 record, since he only played other chess players in the top hundredth of a percentile through most of his career. [Incidentally, one of the strongest Reversi/Othello players is [[Imre Leader]], the godson of [[Imre Lakatos]] (recently mentioned at this desk for his book, [[Proofs and Refutations]]).]
 
::::One thing said here is absolutely false. A (very) strong player will beat a weak one 100% of the time unless s/he falls asleep during the game. It's not a subject for this desk, but it shouldn't go unchallenged.[[User:Julzes|Julzes]] ([[User talk:Julzes|talk]]) 17:59, 3 February 2010 (UTC)
:::::You misunderstood me. I have no knowledge about Reversi, and I wasn't trying to make a statement about how likely a strong Reversi player is to beat a weak one. I was talking about games in general, using "Reversi" as a placeholder. It requires specific domain knowledge to show that Reversi does not exhibit any of the scenarios I mentioned (if this is so).
:::::For professionally played games that do not involve randomness, the probability in question is indeed usually close to 100%; for games that do, it is usually less.
:::::"The percentage of games you would win would roughly match up with the percentage of players you are stronger than" is false in general, and a strong statement about Reversi in particular. -- [[User:Meni Rosenfeld|Meni Rosenfeld]] ([[User Talk:Meni Rosenfeld|talk]]) 19:40, 3 February 2010 (UTC)
 
::::::Reversi doesn't use dice or cards. The question was not about games with randomness, but about random selection of opponents. The match between the proportion of games won and the percentile of skill is closer the more finely graded the skill levels are in a game without randomness. In general, one won't enjoy success over opponents one is stronger than 100% of the time, so it is certainly true that the matchup is not perfect. It is close, and, given a large sample of games against truly randomly chosen opponents, I think it's probably the best estimator for someone in the middle ranks (I don't know if this question has been researched). This, however, probably excludes players who only barely know the game. Someone with only a modicum of skill may very well have a hard time getting good results against a rank beginner (somewhat as you said about strong versus weak players). At any rate, one thing that can be argued is that all games exhibit some randomness, to the extent that weak players may choose their moves with no more skill (and sometimes less) than a random-move selector would. And also, as I said, opponent selection cannot possibly be random and the rating systems that are available are a better guide to determining skill level. There is a lack of transitivity in the ranking question as well, and I think this is the point that Mr. Rosenfeld was trying to get across. Such things as variations in styles of play and specific preparation for specific opponents can either make comparisons impossible or yield false results. In some cases comparisons can be made but are not made well with head-to-head results, and in others comparison is effectively impossible. And then there is the question of which game of Reversi or chess one is talking about, from 1 minute per game to postal.[[User:Julzes|Julzes]] ([[User talk:Julzes|talk]]) 21:36, 3 February 2010 (UTC)
 
:The article about the [[Elo rating system]] (used in chess and tennis) might be of some help here. [[Special:Contributions/66.127.55.192|66.127.55.192]] ([[User talk:66.127.55.192|talk]]) 18:07, 3 February 2010 (UTC)
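:For reference, the key formula there: under the Elo model the expected score of player A against player B is <math>E_A=\frac{1}{1+10^{(R_B-R_A)/400}},</math> so a 200-point rating advantage, for example, corresponds to an expected score of about <math>1/(1+10^{-0.5})\approx 0.76</math>. That gives one standard way of converting a winning percentage against opponents of known strength into a rating, rather than into a percentile.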
 
== distance of a hyperplane from the origin and the norm ==
 
My question is this: Show that the norm ||f|| of a bounded linear functional f (non-zero) on a normed space X can be interpreted geometrically as the reciprocal of the distance D = inf{||x||: f(x)=1} of the hyperplane H = {x : f(x)=1} from the origin. Firstly, what is the meaning of hyperplane? The book I am reading doesn't define hyperplane; it defines a hyperplane parallel to a subspace Y as an element of X/Y. Secondly, how should I prove the result? Thanks.-[[User:Shahab|Shahab]] ([[User talk:Shahab|talk]]) 13:30, 3 February 2010 (UTC)
:A hyperplane is an affine subspace of codimension 1. In other words, it's a set of the form {x : f(x)=a} for some nonzero linear functional f and scalar a. What do you mean by "recipient"? [[User talk:Algebraist|Algebraist]] 15:24, 3 February 2010 (UTC)
::"Recipient" should perhaps be "reciprocal" ? [[User:Gandalf61|Gandalf61]] ([[User talk:Gandalf61|talk]]) 15:29, 3 February 2010 (UTC)
:::Yes, that was a mistake. Corrected now. So how do I proceed? Also I believe I can think of H as x/f(x)+N where N = the null space of f and x is a fixed element of X-N. But in general how do I interpret a hyperplane as an element of X/Y (what element, and what is the subspace Y)? Thanks-[[User:Shahab|Shahab]] ([[User talk:Shahab|talk]]) 17:10, 3 February 2010 (UTC)
::::''Y'' = {''x'': ''f''(''x'') = 0} = your ''N'', and ''H'' is literally an element of ''X''/''Y'' if the latter is defined in the obvious way as a set of equivalence classes; it's not necessary to "interpret" it. Anyway, all this talk about hyperplanes and ''X''/''Y'' is just a red herring. The result that ||''f''|| = 1/inf{||''x''||: ''f''(''x'') = 1} = sup{1/||''x''||: ''f''(''x'') = 1} follows fairly trivially from the definition of ||''f''|| = sup{|''f''(''x'')|: ||''x''|| = 1}; just show that the two suprema are taken over the same set (well, except for 0). —&nbsp;[[User:EmilJ|Emil]]&nbsp;[[User talk:EmilJ|J.]] 17:26, 3 February 2010 (UTC)
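::::To spell out the correspondence between the two sets: if <math>f(x)=1</math> then <math>x\neq 0</math> and <math>u=x/\|x\|</math> is a unit vector with <math>|f(u)|=1/\|x\|</math>; conversely, if <math>\|u\|=1</math> and <math>f(u)\neq 0</math>, then <math>x=u/f(u)</math> satisfies <math>f(x)=1</math> and <math>1/\|x\|=|f(u)|</math>. Since ''f'' is non-zero, the unit vectors with <math>f(u)=0</math> do not affect the supremum, and therefore
::::<math>\|f\|=\sup\{|f(u)|:\|u\|=1\}=\sup\{1/\|x\|:f(x)=1\}=\frac{1}{\inf\{\|x\|:f(x)=1\}}=\frac{1}{D}.</math>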
 
== Discrete Mathematics: Do quantifier orders matter? ==
 
There's a question in my discrete mathematics book that asks us to write in English:
<blockquote>
<math> \forall x \exists y P(x,y)</math>
</blockquote>
and then asks us to write:
<blockquote>
<math> \exists y \forall x P(x,y)</math>
</blockquote>
 
My question is: are these equivalent?
 
Thanks for the help!
[[User:Sebsile|Sebsile]], an alternate account of [[User:Saebjorn|Saebjorn]] 16:17, 3 February 2010 (UTC)
:Try letting your quantifiers range over people, and letting P(x,y) mean "x loves y". [[User talk:Algebraist|Algebraist]] 16:19, 3 February 2010 (UTC)
:If, unlike [http://xkcd.com/ xkcd], you don't want to mix math and romance, try <math>P(x,y)\equiv x<y</math>. -- [[User:Meni Rosenfeld|Meni Rosenfeld]] ([[User Talk:Meni Rosenfeld|talk]]) 16:25, 3 February 2010 (UTC)
:What about this (a kind of variation on Algebraist's example): "for any [[nut (hardware)|x]] there exists a [[wrench|y]] that can screw x" vs "there exists a [[adjustable spanner|y]] that can screw any x". [[User:PMajer|pm]][[User talk:PMajer|<span style="color:blue;">a</span>]] 17:09, 3 February 2010 (UTC)
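:Spelling out Meni's suggestion over the real numbers, as one concrete check: with <math>P(x,y)\equiv x<y</math>, the statement <math>\forall x\,\exists y\;(x<y)</math> is true, since for each <math>x</math> one can take <math>y=x+1</math>; but <math>\exists y\,\forall x\;(x<y)</math> is false, because a single such <math>y</math> would in particular have to satisfy <math>y<y</math>.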
 
'''No''', they're '''not equivalent'''. In the first case the statement is true even if the ''y''-value is different for different ''x''-values. In the second case the statement is not true unless the '''same''' ''y''-value works regardless of what the ''x''-value is.
:
This is precisely the difference between [[pointwise convergence]] and [[uniform convergence]]. [[User:Michael Hardy|Michael Hardy]] ([[User talk:Michael Hardy|talk]]) 19:52, 3 February 2010 (UTC)
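:In quantifier form (for functions <math>f_n\to f</math> on a set <math>S</math>), the two notions differ precisely in where the <math>\exists N</math> sits relative to the <math>\forall x</math>:
:<math>\text{pointwise: }\;\forall\varepsilon>0\;\forall x\in S\;\exists N\;\forall n\geq N\;\;|f_n(x)-f(x)|<\varepsilon;</math>
:<math>\text{uniform: }\;\forall\varepsilon>0\;\exists N\;\forall x\in S\;\forall n\geq N\;\;|f_n(x)-f(x)|<\varepsilon.</math>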
 
::Is there still a distinction between <math> \exists y :\forall x P(x,y)</math> and <math> \exists y \forall x :P(x,y)</math>? It's a long time since I used these symbols. [[User:Dbfirs|''<font face="verdana"><font color="blue">D</font><font color="#00ccff">b</font><font color="#44ffcc">f</font><font color="66ff66">i</font><font color="44ee44">r</font><font color="44aa44">s</font></font>'']] 22:37, 3 February 2010 (UTC)
:::Those colons, as far as I'm aware, mean absolutely nothing. So the difference persists. [[User talk:Algebraist|Algebraist]] 23:32, 3 February 2010 (UTC)
 
::::The colons are old-schoolish symbols for 'such that'. They are kind of unnecessary, so I think modern usage tends to drop them. But to answer the question in English (rather than mathese), the difference would be between ''for every X there exists (some) Y where P(x,y)'' as opposed to ''there exists a (particular) Y for all X where P(x,y)''. In the first case y can be different for different x's; in the second it's the same y for all x's. --[[User_talk:Ludwigs2|<span style="color:darkblue;font-weight:bold">Ludwigs</span><span style="color:green;font-weight:bold">2</span>]] 00:08, 4 February 2010 (UTC)
 
== Homoeomeric curves ==
 
I don't really need help with this; I just thought it would be interesting. The ancient Greeks studied homoeomeric curves, that is, curves for which any part can be made to coincide with any other part. Or in more modern language, a connected 1-manifold embedded in Euclidean space so that its symmetry group under isometries of Euclidean space is transitive. [[Geminus]] showed there are only three homoeomeric curves (in 3-space): the line, the circle, and the circular helix. It appears, though, that there is a fourth type of curve in 4-space and in general there are ''n'' types in ''n''-space. It appears that there are three types of homoeomeric surfaces in 3-space: the plane, the sphere, and the circular cylinder. It seems natural to ask: can the homoeomeric ''m''-submanifolds of Euclidean ''n''-space be classified?--[[User:RDBury|RDBury]] ([[User talk:RDBury|talk]]) 16:19, 3 February 2010 (UTC)
:For the case of curves in three-space, they can be classified in terms of the [[Frenet–Serret formulas]]: one with zero curvature (the line), one with nonzero curvature but zero torsion (the circle) and one with nonzero curvature and torsion (the helix). It seems likely that Jordan's extension to n dimensions will nicely classify the homoeomeric curves in dimension n. [[User talk:Algebraist|Algebraist]] 16:28, 3 February 2010 (UTC)
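:To make the three cases explicit (standard formulas, quoted from memory): for the circular helix <math>\mathbf r(t)=(a\cos t,\,a\sin t,\,bt)</math> with <math>a>0</math>, the Frenet–Serret apparatus gives constant curvature <math>\kappa=\tfrac{a}{a^2+b^2}</math> and constant torsion <math>\tau=\tfrac{b}{a^2+b^2}</math>; taking <math>b=0</math> gives the circle (<math>\kappa=1/a</math>, <math>\tau=0</math>) and <math>a=0</math> the line (<math>\kappa=0</math>). The converse, that constant curvature and torsion force one of these three curves, is the fundamental theorem of space curves applied to constant data.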
:: I'm not sure there are only 4 curves in 4D. Off the top of my head I can think of
:::<math>
\begin{pmatrix} at \\ bt \\ ct \\ dt \end{pmatrix} \quad
\begin{pmatrix} at \\ bt \\ c \cos t \\ c \sin t \end{pmatrix} \quad
\begin{pmatrix} 0 \\ 0 \\ c \cos t \\ c \sin t \end{pmatrix} \quad
\begin{pmatrix} a \cos t \\ a \sin t \\ c \cos t \\ c \sin t \end{pmatrix} \quad
\begin{pmatrix} a \cos nt \\ a \sin nt \\ c \cos mt \\ c \sin mt \end{pmatrix}
</math>
 
::The first three are line, spiral and circle - there's only one spiral as the first two components, ''at'''''e'''<sub>0</sub> + ''bt'''''e'''<sub>1</sub>, are orthogonal to the other two whatever the values of ''a'' and ''b''. Obviously ''a'' = ''b'' = 0 gives a circle, ''c'' = 0 a straight line.
 
::The last three (including the circle) are related to [[SO(4)#Geometry of 4D rotations|simple, isoclinic and double rotations]]: they come from thinking of the paths of points under those rotations, as such a path will map to itself under that rotation and its powers.
 
::The last gives more than one curve as values of m and n can be chosen to generate different closed curves, much like a [[Lissajous curve]], except these are homoeomeric. E.g. ''m'' = 1, ''n'' = 2 is the simplest one that's different from the fourth/isoclinic one. In theory there are as many as there are pairs of relatively prime (''m'', ''n''), i.e. infinitely many. Then there is a class of non-closed curves when ''m''/''n'' is not rational, so the curve winds around forever and is dense but never joins up.
 
::So it gets a lot more complex just for curves in four dimensions. I can't even think what will happen for surfaces, or for more general ''m''-manifolds in ''n''-dimensions, which gain far more degrees of freedom in spaces with far more complex transformations.
 
:::Actually scrub the fourth one. With a suitable change of basis it's just a circle radius <math>\sqrt{a^2 + c^2}</math>. I think my thinking on the paths generated by the general double rotation still makes sense though.--<small>[[User:JohnBlackburne|JohnBlackburne]]</small><sup>[[User_talk:JohnBlackburne|words]]</sup><sub style="margin-left:-2.0ex;">[[Special:Contributions/JohnBlackburne|deeds]]</sub> 20:42, 3 February 2010 (UTC)
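:::Indeed: writing <math>(a\cos t,\,a\sin t,\,c\cos t,\,c\sin t)=\cos t\,(a,0,c,0)+\sin t\,(0,a,0,c)</math>, the two constant vectors are orthogonal and both have length <math>\sqrt{a^2+c^2}</math>, so the curve is a plane circle of that radius, confirming the correction above. That leaves the <math>m\neq n</math> double-rotation curves (and their irrational-ratio cousins) as the genuinely new examples beyond the line, helix and circle.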
 
== Clarification on 'Electric Potential Energy' article ==
 
Hi all,
 
further to my question on charged spheres as capacitors a few days ago, I was discussing electromagnetism with a friend a few years older than me and he pointed out that, in the derivation of the alternate formula for calculating energy ([[Electric_potential_energy| Electric Potential Energy - under 'Energy stored in an electrostatic field distribution']]), when we derive the |'''E'''|<sup>2</sup> formula for energy, we throw away a surface integral over a sphere of infinite radius because <math>\phi \to 0</math> as <math>r \to \infty</math>: however, as the potential tends to 0, it's also true that the surface area of the sphere tends to infinity.
 
Now I can appreciate the basic concept of what's going on here: as we take the limiting values of phi and the surface area for <math>r \to \infty</math>, phi tends to 0 sufficiently fast that the integral over the surface tends to 0 despite the surface area becoming arbitrarily large. But what I want to know is '''why'''? I've consulted 2 textbooks (and the above article) on this, and the only answer I seem to be able to find is a mumbled 'oh, well, the potential just goes to 0 faster...' without any actual justification. Why can we be certain that the potential drops off sufficiently fast that the surface integral becomes negligible? Can anyone give me a proper answer without just hiding behind the fact that '0 times infinity = 0 in this case'?
 
I greatly appreciate any help you're able to provide, all I want is a proper justified answer or a decent explanation - many thanks, [[User:Otherlobby17|Otherlobby17]] ([[User talk:Otherlobby17|talk]]) 22:59, 3 February 2010 (UTC)
:Far from all charges (you have to assume the charge density is localized, or at least itself eventually falls off with radius "sufficiently fast"), they act like a single point charge, whose potential goes as <math>r^{-1}</math> and whose field goes as <math>r^{-2}</math>. The area of the bounding surface goes as <math>r^2</math>, so the product of the potential, the field, and the area goes as <math>r^{-1}</math> and vanishes as ''r'' grows without bound. (If the total charge is 0, the potential and field will drop off at an even faster rate determined by the precise geometry of the charges.) --[[User:Tardis|Tardis]] ([[User talk:Tardis|talk]]) 01:57, 4 February 2010 (UTC)
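:To put rough orders of magnitude on Tardis's answer: the discarded term is (up to constants) the surface integral <math>\oint_S\phi\,\mathbf E\cdot d\mathbf A</math> over a sphere of radius <math>r</math>. For a localized total charge <math>q</math>, at large <math>r</math> one has <math>\phi\sim\tfrac{q}{4\pi\varepsilon_0 r}</math> and <math>|\mathbf E|\sim\tfrac{q}{4\pi\varepsilon_0 r^2}</math>, while the area of the sphere is <math>4\pi r^2</math>, so the whole term scales as <math>r^{-1}\cdot r^{-2}\cdot r^{2}=r^{-1}\to 0</math>. So the "0 times infinity" is really a limit of order <math>1/r</math>, not an act of faith.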
 
= February 4 =
 
== Scientific Notation ==
 
If 6.02×10<sup>23</sup> atoms of carbon have a mass of 12 g, then what is the mass of 1 atom? Express your answer in scientific notation.
 
I don't even know where to start on this one. I'm pretty sure it has something to do with dividing the exponent and 6.02.
 
Explaining how you got your answer would be great.
 
[[Special:Contributions/174.112.38.185|174.112.38.185]] ([[User talk:174.112.38.185|talk]]) 01:58, 4 February 2010 (UTC)
 
:If two cars weigh 2 tonnes, how much does one car weigh? <span style="font-size: smaller;" class="autosigned">—Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/129.67.39.49|129.67.39.49]] ([[User talk:129.67.39.49|talk]]) 02:07, 4 February 2010 (UTC)</span><!-- Template:UnsignedIP --> <!--Autosigned by SineBot-->
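:In case the powers of ten are the sticking point, here is the same kind of division with different, made-up numbers: if <math>3.0\times10^{6}</math> identical items have a total mass of <math>6.0</math> g, then one item has mass <math>\tfrac{6.0}{3.0\times10^{6}}=2.0\times10^{-6}</math> g; divide the leading numbers, and the power of ten in the denominator becomes a negative power in the answer.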
 
== Statistics course (titled Stochastic processes) ==
 
I'm taking a statistics course (titled Stochastic Processes). It's like no other stats course I've taken previously because the prof covers many proofs and mathematical theorems in lecture. I've taken only calculus 1 to 3 and I don't have any background in proofs. I don't know why, but I also have proof-phobia. Proofs just never appealed to me, and I was never able to understand or reproduce them by myself. I don't know how I can ace this course. Even the homework is really hard. In my past stats courses, I prepared for exams by doing chapter review questions at the end of every chapter. But the prof's questions are nothing like the ones in the text. What should I do? <span style="font-size: smaller;" class="autosigned">—Preceding [[Wikipedia:Signatures|unsigned]] comment added by [[Special:Contributions/142.58.129.94|142.58.129.94]] ([[User talk:142.58.129.94|talk]]) 02:29, 4 February 2010 (UTC)</span><!-- Template:UnsignedIP --> <!--Autosigned by SineBot-->
 
:Is the course a requirement? If not, drop it. If the course is required, do any other professors teach it? Check to see if they are better suited to your skills. If so, switch courses. If it is required and he is the only professor, meet him after class - as much as possible - and ask tons and tons of questions. The more questions you ask, the more answers you will get. -- [[User:Kainaw|<font color='#ff0000'>k</font><font color='#cc0033'>a</font><font color='#990066'>i</font><font color='#660099'>n</font><font color='#3300cc'>a</font><font color='#0000ff'>w</font>]][[User talk:Kainaw|&trade;]] 02:31, 4 February 2010 (UTC)
 
:It's not a requirement. But if I drop the course now, I'll get no refund for the course tuition (around $400). I'll also have a "W" mark on my transcript. I don't have a single "W" right now, but I heard it doesn't look good.