Science topic
Pure Mathematics - Science topic
Explore the latest questions and answers in Pure Mathematics, and find Pure Mathematics experts.
Questions related to Pure Mathematics
Is the reciprocal of the inverse tangent $\frac{1}{\arctan x}$ a (logarithmically) completely monotonic function on the right half-line? If $\frac{1}{\arctan x}$ is a (logarithmically) completely monotonic function on $(0,\infty)$, can one give an explicit expression for the measure $\mu(t)$ in the integral representation in the Bernstein--Widder theorem for $f(x)=\frac{1}{\arctan x}$? These questions are stated in detail at the website https://math.stackexchange.com/questions/4247090
Relevant answer
From now on I will stop talking about the proof given by A. Venkata Lakshmi. I will turn my attention to finding a real proof of the logarithmically complete monotonicity of the reciprocal $\frac1{\arctan x}$ on $(0,\infty)$. Thank you, bye bye.
Hello, can someone help me to solve this? I really don't know about these problems and still can't solve them, but I am curious about the solutions. Hopefully you can provide all the solutions. Sincerely, Wesley
- IMG_20200531_170040_279.jpg
Relevant answer
Hi, in what area was the issue raised: Euclidean space, Hilbert space, Banach space?
Many proposals for solving RH have been suggested, but has it been solved? What do you think?
Relevant answer
Interesting question for possible future discussions.
Relevant answer
If we consider your problems as MCQ problems (i) and (ii) for undergraduate students, the correct answers are (a) for problem (i) and (d) for problem (ii). PS: Observe that there are several correct choices in each case, but they are not included in the offered choices. Regards
More precisely, if the Orlik-Solomon algebras A(A_1) and A(A_2) are isomorphic in such a way that the standard generators in degree 1, associated to the hyperplanes, correspond to each other, does this imply that the corresponding Milnor fibers $F(A_1)$ and $F(A_2)$ have the same Betti numbers ? When A_1 and A_2 are in C^3 and the corresponding line arrangements in P^2 have only double and triple points, the answer seems to be positive by the results of Papadima and Suciu. See also Example 6.3 in A. Suciu's survey in Rev. Roumaine Math. Pures Appl. 62 (2017), 191-215.
Relevant answer
Your question is very interesting. Regards and best wishes, Mirjana
We study various laws of group theory and ring theory in algebra, but where are they used?
Relevant answer
An application of ring theory is geometry; for example, check the geometrical properties of the ring of complex numbers or of neutrosophic numbers.
Can we apply theoretical computer science to proofs of theorems in mathematics?
By dynamical systems, I mean systems that can be modeled by ODEs. For linear ODEs, we can investigate stability via eigenvalues, and for nonlinear as well as linear systems we can use Lyapunov stability theory. Is there any other method to investigate the stability of dynamical systems?
Relevant answer
An alternative method of demonstrating stability is given by Vasile Mihai Popov, a great scientist of Romanian origin who settled in the USA. The theory of hyperstability (since renamed the theory of stability for positive systems) belongs exclusively to him (1965). See the Yakubovich-Kalman-Popov theorem, the Popov-Belevitch-Hautus criterion, etc. While the Lyapunov (1892) method involves "guessing the optimal construction" of the Lyapunov function to obtain a domain close to the maximal stability domain, Popov's stability criterion provides the maximal stability domain for the nonlinearity parameters in the system (see the Hurwitz criterion, the Aizerman hypothesis, etc.).
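For the linear case mentioned in the question, the eigenvalue test is easy to automate. Here is a minimal Python sketch (my own illustration, with an arbitrarily chosen example matrix) that checks whether all eigenvalues of the system matrix lie in the open left half-plane:

```python
# Minimal sketch (illustration only): asymptotic stability of a linear system
# dx/dt = A x holds iff every eigenvalue of A has a strictly negative real part.
import numpy as np

def is_asymptotically_stable(A: np.ndarray) -> bool:
    """Return True if every eigenvalue of A has a strictly negative real part."""
    eigenvalues = np.linalg.eigvals(A)
    return bool(np.all(eigenvalues.real < 0))

# Example: a damped oscillator x'' + 0.5 x' + 2 x = 0 written as a first-order system.
A = np.array([[0.0, 1.0],
              [-2.0, -0.5]])
print(is_asymptotically_stable(A))  # True: both eigenvalues lie in the left half-plane
```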
A careful reading of The Absolute Differential Calculus by Tullio Levi-Civita (published by Blackie & Son Limited, 50 Old Bailey, London, 1927), together with Plato's cosmology, strongly suggests that gravity is actually real-world mathematics, or in other words: is gravitation a purely experimental mathematics?
Relevant answer
Dear Javad Fardaei, sorry for the delay. Good question. I think this is a matter for the future. Greetings, Sergey Klykov
Please share your opinion. Mathematics is the queen of the sciences. It deals with the scientific approach to obtaining useful solutions in multifarious fields. It is the backbone of modern science. Ever since its inception it has been developing in manifold directions. In these days of advanced development, it is interlinked with every important branch of technical and modern science. Pure mathematics and applied mathematics are the two eyes of mathematics. Both have an equal and significant role to play in the field of research.
Relevant answer
Mathematics is the backbone of all branches of knowledge.
Computing the nontrivial zeros of the Riemann zeta function is an algebraically complex task. However, if someone is able to prove that an iterative formula can be used to obtain approximations of all nontrivial zeros, then its value is limitless. Proving such an iterative formula is, however, a huge challenge. If somebody proved such a formula, what kind of impact would it have on the Riemann hypothesis? Also, how accurately must approximately calculated nontrivial zeros match the true nontrivial zeros to be accepted as close? I have calculated and attached the first 50 approximate nontrivial zeros using an iterative formula that I have proved; it can also produce millions of nontrivial zeros. But I am very worried about its accuracy. Are these calculations OK?
Relevant answer
In a paper that can be found on arXiv or at , LeClair gives a reasonably accurate algorithm to estimate the non-trivial zeros up to 10^200 = Googol^2. My paper, which can be found in Cogent Mathematics, on arXiv or on RG, gives an estimate that bounds the n-th zero and checks LeClair's result for the number Googol. Although both of these are not iterative, and work only for non-trivial zeros that sit on the critical line, they are predictive and easily calculated. Once a zero is estimated, or bounded, its accurate value can then be found from the formula given.
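For checking the accuracy of such approximations, one practical option (my suggestion, not part of LeClair's method) is to compare them against high-precision zeros computed with the mpmath library, whose zetazero routine returns the n-th nontrivial zero on the critical line:

```python
# Sketch (illustration only): high-precision reference zeros for comparison.
from mpmath import mp, zetazero

mp.dps = 30  # working precision in decimal digits

# zetazero(n) returns the n-th nontrivial zero 1/2 + i*t_n on the critical line.
for n in range(1, 6):
    rho = zetazero(n)
    print(n, rho.imag)  # 14.1347..., 21.0220..., 25.0108..., 30.4248..., 32.9350...
```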
Is there a difference between pure and applied mathematics? In Wikipedia, we can find the following definition: Number theory (or arithmetic or higher arithmetic in older usage) is a branch of pure mathematics devoted primarily to the study of the integers. But number theory is mostly applied, for example, in modern data encryption techniques (cryptography). So, is there really a difference between pure and applied mathematics?
Relevant answer
The division of "Mathematics" into two "kinds" is fictitious and misleading. A particular mathematical technique may turn out to be useful in practical situations and, conversely, the application of mathematics to the solution of a practical problem may lead to new mathematical discoveries. In either case, the adjectives "pure" and "applied" are not descriptors of the technique itself, but of its context.
Dear All, I am hoping that one of you has the First Edition of this book (pdf): Introduction to Real Analysis by Bartle and Sherbert. The other editions are already available online; I need the First Edition only. It would be a great help to me! Thank you so much in advance. Sarah
I'm a teacher and struggling a lot to complete my MS. I need to write an MS-level research thesis. I can work in decision making (preference-relations-related research), artificial intelligence, semigroups or Γ-semigroups, computing, soft computing, soft sets, a MATLAB-related project, etc. Kindly help me; I would be most grateful to you for this. Thanks.
Relevant answer
The answers to the question for this thread are excellent. There is a bit more to add. Before starting either an M.Sc. or a Ph.D. thesis, it is very important to read published theses by others. Here are examples: Source of M.Sc. and Ph.D. theses: Another source of theses:
The function assumes a direct and an inverse law. What do we know about the inverse function? Never mind; it is just the shadow of the direct function. Why don't we use the inverse function as well as the direct one? I propose the concept of an unrelated function as an extension of the concept of the inverse function. There is a sum of intervals, on each of which the function is invertible (strictly monotonic) - a nondegenerate function. For any sum of intervals, there is an interval where the function is irreversible - a degenerate function.
Relevant answer
Yes, Gokoran; since you simply said inverse and not inverse function, I thought you just meant division. Inverse functions also exist for functions of complex variables.
Mathematics has always been one of the most active fields for researchers, but most attention goes to one or a few subjects at a time, for several years or decades. I'd like to know: what are the most active research areas in mathematics today?
Relevant answer
Yes, mathematics has always been not only one of the most active fields for researchers; for a long time it was, along with philosophy, one of the first sciences. But it is hard to say what the most active research areas in mathematics are today, or which are the most important ones being scientifically explored. Less and less support is provided for purely fundamental mathematics nowadays, and more and more it is required to solve specific problems posed by "someone else"; that is, mathematics turns into a servant of other sciences.
Is the canonical 2-dimensional standard probability simplex - the convex hull of the equilateral triangle in three-dimensional Cartesian space whose vertices are (1,0,0), (0,1,0) and (0,0,1) in Euclidean coordinates - closed under all and only convex combinations of probability vectors, that is, the set of all triples of non-negative real numbers that sum to 1? Do any unit probability vectors (triples of non-negative numbers at each point) go missing when it is conceived as a probability vector space? For example, <p1=0.3, p2=0.2, p3=0.5> may fail to be an element of the domain if the simplex in barycentric/probability coordinates, as a function of p1, p2, p3 (with y denoting p2 and z denoting p3), is not constructed appropriately. Here the entries of each vector <p1, p2, p3> satisfy p1+p2+p3=1 and pi>=0, and, for instance, the plane x=m=1/3 denotes the set of probability vectors whose first entry is 1/3, i.e. <p1=1/3, p2, p3> with p2+p3=2/3 and p1, p2, p3 >= 0. Does using absolute barycentric coordinates rule out this possibility of a vector going missing, where <p1=0.3, p2=0.2, p3=0.5> is the vector located at (p1, p2, p3) in absolute barycentric coordinates? Given that the convex hull is the smallest such set inscribed in the equilateral triangle that is closed under all convex combinations of the vertices (I presume this means that all and only triples of non-negative pi summing to 1 are included, since any proper subset may fail to include the vertices, etc.), is it guaranteed that no vectors with non-negative entries ever go missing from the domain when it is traditionally described, in three coordinates, as the convex hull of the three standard unit vectors (1,0,0), (0,0,1) and (0,1,0) - the equilateral triangle in Cartesian coordinates x, y, z in three-dimensional Euclidean space? Or can this only be guaranteed by representing it in this fashion?
Relevant answer
Of course it is; that is what it is by definition. I probably should have thought more about this back when I originally posted. If it isn't, then nothing is. It is the closed convex hull of its vertices, i.e. all points in [0,1]^3 that can be expressed as convex combinations of its vertices (1,0,0), (0,0,1), (0,1,0). Any point (x,y,z) in [0,1]^3 with x+y+z=1 and x,y,z >= 0 can simply be expressed as x*(1,0,0)+y*(0,1,0)+z*(0,0,1)=(x,y,z). Being closed under all convex combinations of the vertices simply means that it contains all points in [0,1]^3 expressible as c1*(1,0,0)+c2*(0,1,0)+c3*(0,0,1) with non-negative c1, c2, c3 and c1+c2+c3=1; it is just the set of all and only triples of non-negative coordinates that sum to one, and clearly we can set c1=x, c2=y, c3=z. So it is all, and only, probability triples (x,y,z) with x+y+z=1. I should have thought this through. The main questions are (1) and (2).
(1) Does the canonical 2-dimensional probability simplex contain the simplex of the sums x+y, x+z, z+y at each point? Clearly at each point (x,y,z) of the simplex, x+y, x+z, z+y lie in [0,1] and their sum is (x+y)+(x+z)+(z+y)=2(x+y+z)=2. But does it contain, at some point, every such triple (l,g,h)=(x+y, x+z, z+y) with 0<=l,g,h<=1 and l+g+h=2? I would say yes: given such an (l,g,h), take g=x+h-y, so y=x+h-g and l=2x+h-g, i.e. x=(1/2)(l-h+g), which is uniquely determined, and then x+y+z=(1/2)(2x+2y+2z)=(1/2)(l+g+h)=1. The only way such a point could fail to lie in the simplex is if, for example, x<0, i.e. l+g<h; but since h<=1 this gives (l+g)+h<1+h<=2, contradicting l+g+h=2.
(2)-(3) Does it contain all (l,g,h) with l+g+h=1, where l, g, h are the sums of the first two, the first and third, and the second and third coordinates, respectively, of three distinct points of the simplex, (x1,y1,z1), (x2,y2,z2), (x3,y3,z3): l=x1+y1, g=x2+z2, h=y3+z3, with none of the coordinates equal to 0? Clearly for any m in [0,1] there is a point <x,y,z> of the simplex with x=m, so x attains m there, and 1-m also lies in [0,1], so there is a point <x1,y1,z1> whose first coordinate is x1=1-m; then y1+z1=1-x1=m, so y+z assumes the value m. Since the y and z coordinates can likewise assume any value in [0,1], we obtain three points p1=(x1,y1,z1), p2=(x2,y2,z2), p3=(x3,y3,z3) in the simplex with x1, y2, z3 in [0,1] and x1+y2+z3=2.
As x+y+z=1 at every point of the simplex, (x1+y1+z1)+(x2+y2+z2)+(x3+y3+z3)=3, and since x1+y2+z3=2, we get (x1+y1)+(y2+z2)+(z3+x3)=3-(x1+y2+z3)=1. On the edges of the simplex (where, apart from the vertices, exactly two coordinates of a point are non-zero), we may take y1=0, so x1+y1=x1=l; x2=0 and z2=g, so x2+z2=g; and at the third point z3=0 and y3=h, so y3+z3=h. If there were a counter-example, there would have to be no point (x,y,z) of the simplex with 0<=y+z=l=1-g-h<=1 for some (h,g,l) with h+g+l=1 and h,g,l>=0, which is impossible, since 0<=1-(g+h)=l<=1. So as long as the canonical 2-dimensional probability simplex (the set of all probability triples) contains on its edges (the points with two positive entries) the canonical 1-dimensional probability simplex - the set of pairs of non-negative reals in R^2 summing to 1 (all convex combinations of two non-negatives that sum to 1) - it will contain three such edge points meeting the above sum constraint: (x1,y1,z1), (x2,y2,z2), (x3,y3,z3) with l=x1+y1, g=x2+z2, h=y3+z3 and h+g+l=1. Along the edges, the pair of the canonical 1-simplex identified by a triple on the edge of the canonical 2-simplex consists of the two positive (non-zero) entries of that triple at the point (x,y,z): the first positive coordinate (which will be x or y) becomes the x coordinate of the pair and the second positive entry (which will be z or y) becomes its y coordinate; e.g. (x=1/6, y=0, z=5/6) goes to (x=1/6, y=5/6), with x+y=1. This is unique except at the vertices (where one can use either (1,0) or (0,1)), since the remaining coordinate is zero, the point being taken from the edge of the canonical 2-simplex where precisely one of (x,y,z) is 0 (two of them, of course, at the vertices themselves).
In QFT, computations are done with plane-wave free solutions of the Dirac equations: if one were to consider full solutions of the Dirac equation in interaction with its own electrodynamic field, even without field quantization, what would one obtain? Does anybody know of full solutions, even at a classical level? NOTE: the question is of purely mathematical interest, so I am not interested in reading that we commonly do not do that in standard computations, I would like to know what would happen if considering the problem with mathematical rigour.
Relevant answer
Igor, is it possible that you missed the Volkov solutions? These are used routinely to study matter-field interaction in the strong e.m. field limit.
People usually say that a number greater than any assignable quantity is infinity, and presumably the same holds for −∞. We deal with infinity ∞ in our mathematical or statistical calculations; sometimes we assume it, sometimes we arrive at it. But what is the physical significance of infinity? Does anyone have philosophical comments?
Relevant answer
Mathematicians have a precise definition of infinity that can be used to prove theorems about it. A set S has infinite size if it is possible to create a 1-to-1 correspondence of the elements of S with the elements of a proper subset of S. For example, the positive integers S = {1,2,3,...} is an infinite set because there is a 1-to-1 correspondence between S and the even integers in S: 1 <-> 2, 2 <-> 4, 3 <-> 6, 4 <-> 8, ... Such a 1-to-1 correspondence is impossible for finite sets.
Today, every educational field or domain contains several branches. You first choose one branch of your field in which to prepare your Master's and Ph.D. degrees. Which system is preferable for you: (1) studying the same branch for both the Master's and Ph.D. degrees, or (2) studying a different branch for the Ph.D. than for the Master's? And why?
Relevant answer
In my opinion, doing a PhD in a different field, in which the ideas of the Master's are highly applicable, is good. Rising in academia is like going up a ladder that is slightly inclined, so that the importance of each step is retained.
All of us have a different point of view when we prepare a study on some topic. Some of us study only the deepest results of the work, whereas others think one must investigate all results associated with the topic regardless of their difficulty or ease. On the other hand, some scholars focus on both. What is your opinion?
Relevant answer
This is an important question. In addition to the excellent answer given by @ Md. Sarfaraz Alam , there is a bit more to add. Quality is the most important feature of good papers. A paper with very high quality will stand the test of time and will be remembered long after it is written. By quality, I mean that precise definitions and detailed examples are given. Appropriate definitions lead to theorems. The need for the highest quality is needed in proofs of theorems. The quality of proofs of theorems is measured in terms of concise but accurate deductions from the definitions, relations and other theorems that precede each proof. Quantity is definitely not an appropriate goal in writing good papers. It is better to let the size of a paper be a function of the need for narrative and examples that illustrate abstract ideas in a paper.
Relevant answer
<Geometers don't like groups>: In his influential 'Erlanger Programm', Felix Klein characterized geometries as the theories of associated group invariants. <Algebraists don't like fields>: Galois solved the most urgent algebra problem of his time (general solvability of polynomial equations of order n) by studying properties of fields. So my rudimentary ideas concerning the history of mathematics suggest just the contrary of what Claude comes up with. Diffuse questions get diffuse answers!
State-dependent additivity versus state-independent additivity: is this akin to the distinction between Cauchy additivity and local Kolmogorov additivity/normalization of subjective credence/utility, in a simplex representation of subjective probability (or utility) ranked by objective probability? That is, in the unit simplex of dimension 2 or more (at least three atomic outcomes on each unit probability vector, a finitely additive space), where every event is ranked globally, within vectors and between distinct vectors, by <, > and especially '=', I presume that one is mere representability and the other uniqueness: the distinction between the trivial conditions
(1) F(x)+F(y)+F(z)=1 for x, y, z mutually exclusive and exhaustive;
(2) F(x u y)=F(x)+F(y) for x, y disjoint on the same vector, i.e. F(A v B)=F(A)+F(B) for A, B disjoint on the same vector;
(3) F(A)+F(AC)=1 for A and its complement AC, disjoint and mutually exclusive on the same unit vector,
and more like these, versus something more like the following uniqueness properties, for all events x, y in the simplex:
(3a) F(x+y)=F(x)+F(y): Cauchy additivity (whether or not the events lie on the same vector or probability state); this needs no explaining and applies arbitrarily in the simplex.
(3b) x+y=z+m implies F(x)+F(y)=F(z)+F(m): any two or more events with the same objective sum must have the same credence sum, same vector or not, disjoint or not (almost Jensen's equality).
(3c) F(1-x-y)+F(x)+F(y)=1: any three events in the simplex, same vector or not, must sum to one in credence if they sum to one in objective chance.
(3d) F(1-x)+F(x)=1: any two events whose chances sum to one must sum to 1 in credence, same probability space/state/vector or not; a global symmetry (distinct from complement additivity), since it applies to non-disjoint events on distinct vectors.
Together with the equalities in the rank, 'rank equalities plus complement additivity' give rise to this in a two-outcome system. It seems to be entailed by a global modal cross-world rank, so long as there are at least three outcomes, without use of mixtures, unions or trade-offs, if one's domain is the entire simplex: that is, adding function values of sums of events on distinct (arguably non-commuting) probability vectors to the value of some other event, F(x+y)=F(x)+F(y). This arises in the context of certain probabilistic and/or utility uniqueness theorems, where one takes one objective probability function and tries to show that any other probability function satisfying one's constraints must be the same function.
On Handa's New Theory of Cardinal Utility and the Maximization of Expected Return - Fishburn.pdf
Relevant answer
And F(x+y)=F(x)+F(y) in the context of certain probabilistic and/or utility uniqueness theorems, where one takes one objective probability function and tries to show that any other probability function, given one's constraints, must be the same function. What is meant by state-dependent additivity? Does it mean that instead of F(x u y)=F(x)+F(y) for x, y disjoint (lying on the same vector, the same finite probability triple), or instead of F(x)+F(y)+F(z)=1 iff x, y, z are elements of the very same vector (the same triple) in the simplex, one literally has F(x+y)=F(x)+F(y) over the entire domain of the function - i.e. adding up (arguably non-commuting) elements of distinct vectors - or F(x)+F(y)+F(1-x-y)=1 arbitrarily over the simplex or domain of interest, where the only restriction is that one can only add up elements as many times as they are present in the domain?
With Cauchy additivity, even if one's domain is merely a single vector <1/3, 1/6, 1/2, unit event = 1>, then so long as 1/6 is in the domain (supposing that the entire domain, the probability vector space, is just that vector, dom(F) = {1/3, 1/6, 1/2, 1}), if F(1)=1 one can arbitrarily add up 1 = F(1) = F(1/6 + ... + 1/6) six times = 6F(1/6), so F(1/6)=1/6. I presume, however, that if one's domain is the entire simplex there would be no relevant difference between outright Cauchy additivity and state-independent additivity, and thus to presume so would be outright presumptuous. Or is this a name for a cross-world global rank, which entails it so long as there are at least three atomic events on each vector of the simplex (so long as the simplex is well constructed and the rank is global and modal), even if only finite local standard additivity is presumed, since one can transfer values of equiprobable events onto other vectors where they are disjoint?
Is state-independent additivity the claim that one can arbitrarily add up the function value F(1/6) six times, to reach F(1)=1 (the function value, let us say, at chance = 1) and so obtain F(1/6)=1/6, so long as those events are ranked equal and are present at least six times somewhere or other, even in distinct states or vectors (of the same system)? Or does this apply to local additivity, where one has a global modally transitive rank over the simplex with n >= 2 (i.e. at least three elements in each triple)? A cross-world rank with equalities will entail this in any case, if justified. So if one can derive that cross-world additivity must hold, given finite additivity and a global modal rank including cross-world equalities (justified for whatever reason), on pain of either local additivity (probabilism) failing or one's justified global and local total rank being violated, does this count as presumptuous?
Given positive real numbers $a_1, a_2, \ldots, a_n$, define $p_k = a_k \prod_{i=1,\, i \neq k}^{n} (a_i^2 - a_k^2)$. How can one prove that $\sum_{k=1}^{n} \frac{1}{p_k}$ is positive? I had an idea for a proof, but I am not sure it would work; the idea is written in the attached .png file. EDIT: See the .png file here.
Relevant answer
Thanks to Viera's answer, I see that the proof for even n is quite sufficient. Thank you, Viera, for keeping calm while giving interesting answers to interesting questions :) Best regards, Joachim. PS. Meanwhile I have realized that, replacing $a_k^2$ by $c_k$ and assuming increasing ordering (without loss of generality, as Viera has noticed), we get for the sum S of the question the following expression in terms of a divided difference of order n-1: $(n-1)!\,(-1)^{n-1} S = (n-1)!\,[c_1, c_2, \ldots, c_n : f(c)]$, which equals the value $f^{(n-1)}(b)$ of the derivative of f of order n-1 at some point b in $[c_1, c_n]$, where in this case $f(c) = 1/\sqrt{c}$. For the calculus of divided differences and their representation see e.g. Having this and the sign changes of the derivatives of the inverse square root, one gets a positive value of S for any choice of positive $c_k$'s. A further conclusion is that this holds for every negative power of c put in place of f(c), and also for every Laplace transform of a positive measure on $\mathbb{R}_+$, since then the derivatives of order n have sign $(-1)^n$ (cf. the William Feller Bible on probability, about completely monotone functions). JoD
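As a quick numerical sanity check of the claimed positivity (an illustration only, not a proof), one can evaluate the sum directly for random positive a_k:

```python
# Sketch: p_k = a_k * prod_{i != k} (a_i^2 - a_k^2); check that sum_k 1/p_k > 0
# for random choices of distinct positive a_k.
import random

def sum_reciprocal_p(a):
    total = 0.0
    for k, ak in enumerate(a):
        pk = ak
        for i, ai in enumerate(a):
            if i != k:
                pk *= ai**2 - ak**2
        total += 1.0 / pk
    return total

random.seed(0)
for n in range(2, 8):
    a = [random.uniform(0.1, 5.0) for _ in range(n)]
    print(n, sum_reciprocal_p(a) > 0)  # expected: True in every case
```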
In order to get a homogeneous population, I inspected two conditions and filtered the entire population (all possible members) according to these two conditions, then used all the remaining filtered members in the research. Is it still a population, or is it a sample (and what is it called)? Also: if we work on a mathematical equation by adding another part to it, then find the solution and apply it to the real world, can we generalize its result to other real-world settings?
Relevant answer
Rula - I am not sure I understand your process, but if your 'sample' is really just a census of a special part of your population, then you can get descriptive statistics on it, but you cannot do inference to the entire population from it. You might find the following instructive and entertaining. I think it is quite good. Ken Brewer's Waksberg Award article: Brewer, K.R.W. (2014), "Three controversies in the history of survey sampling," Survey Methodology, (December 2013/January 2014), Vol 39, No 2, pp. 249-262. Statistics Canada, Catalogue No. 12-001-X. Cheers - Jim
It will be of immense help to me if you can suggest some papers and books related to the same.
Relevant answer
Dear Alka Munjal, Perhaps you can see Summability Theory And Its Applications Author(s): Feyzi Basar From the cover ''The theory of summability has many uses throughout analysis and applied mathematics. Engineers and physicists working with Fourier series or analytic continuation will also find the concepts of summability theory valuable to their research. The concepts of summability have been extended to the sequences of fuzzy numbers and also to the theorems of ergodic theory. This ebook explains various aspects of summability and demonstrates applications in a coherent manner. The content can readily serve as a useful series of lecture notes on the subject. This ebook comprises of 8 chapters starting from classical sequence spaces and covering matrix transformations and fuzzy numbers. An accompanying bibliography with extensive references makes this a valuable source of information for readers interested in summability theory as well as other branches of science.''
(1) How can we find the partial sum of n^1000 instantly? (2) Is there a simple method to find the partial sum of a sequence f(n)? (3) Is there any general method to compute partial sums of sequences? (4) What is the value of the method if we have a good approximation for every differentiable sequence?
Relevant answer
@ Juan Weisz: It is more than that. There is a little secret of analytic continuity and the discreteness of the integers.
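Regarding question (1), one concrete route (a sketch, under the assumption that a symbolic closed form is acceptable) is Faulhaber's formula, which sympy can generate; the partial sum of k^m is then a polynomial in n that evaluates instantly:

```python
# Sketch: sympy returns the closed-form Faulhaber polynomial for sum_{k=1}^{n} k^m.
from sympy import symbols, summation

k, n = symbols('k n', positive=True, integer=True)

print(summation(k**3, (k, 1, n)))   # n**4/4 + n**3/2 + n**2/4
# For large exponents such as k**1000 the same call works, but the resulting
# polynomial has degree 1001, so it is expensive to build and display.
```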
In basic numerical analysis, it is shown that Aitken's method improves on the basic iteration method in speed of convergence in the asymptotic sense (see detail below if desired). Now it seems that this should be meaningless in practice, giving no guarantees of 'faster' for any finite number of iterations. I found that this is not only my feeling, but that this concern is echoed in the related Wikipedia article: Although strictly speaking, a limit does not give information about any finite first part of the sequence, this concept is of practical importance in dealing with a sequence of successive approximations for an iterative method, as then typically fewer iterations are needed to yield a useful approximation if the rate of convergence is higher. This may even make the difference between needing ten or a million iterations insignificant. My questions then are: 1. Barring empirical evidence, is there ANY formal way of turning the asymptotic result into a result in terms of finite iterations? Even at least probabilistically? Even when conditions are added? This would be an example of what I have in mind: Given a function with condition such and such (smooth, etc.), the convergence is indeed faster in n iterations with probability p(n). 2. If there are such results, can you point me to some of them? 3. If there are no such results, should there be no interest in trying to find them? If not, why not? 4. Doesn't this state of affairs 'bother' numerical analysts? If not, shouldn't it? 5. Do people reading this have their own 'intuitions' about when the speed of convergence holds in practice? What are these intuitions? Why not try to formalise them? Detail: When the series {x_n} generated using x_i = f(x_{i-1}) converges under the usual conditions, Aitken's method, generating the series {x'_n} using x'_n = x_n - (x_{n+1} - x_n)^2 / (x_{n+2} - 2x_{n+1} + x_n), converges faster in the sense that, with s being the solution of f(s) = s, we have (x'_n - s) / (x_n - s) → 0 as n → ∞.
Relevant answer
Hi, I think in practice there is real interest in using this kind of acceleration scheme. I have implemented in the past the Richardson scheme (an alternative to Aitken) and the convergence speed was improved in practice. I think the asymptotic mathematical result cannot be converted into a probabilistic condition. In order to really measure the benefit of using an acceleration scheme, you can perform Monte Carlo runs, changing the initial conditions in order to quantify the acceleration of convergence.
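A small empirical illustration of that last point (my own sketch, using the linearly convergent fixed-point iteration x_{n+1} = cos(x_n) rather than any example from the thread): it compares the plain iterates with the Aitken-accelerated values at each step.

```python
# Sketch: plain fixed-point iteration vs. Aitken's delta-squared acceleration.
import math

def aitken(x0, x1, x2):
    denom = x2 - 2 * x1 + x0
    return x2 if denom == 0 else x0 - (x1 - x0) ** 2 / denom

f = math.cos                     # linearly convergent fixed-point iteration
target = 0.7390851332151607      # solution of cos(x) = x

xs = [1.0]
for _ in range(8):
    xs.append(f(xs[-1]))

for i in range(len(xs) - 2):
    plain = abs(xs[i + 2] - target)
    accel = abs(aitken(xs[i], xs[i + 1], xs[i + 2]) - target)
    print(f"step {i}: plain {plain:.2e}  Aitken {accel:.2e}")
```

This is, of course, only evidence for one particular function; it does not answer the finite-iteration guarantee asked about in the question.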
I was wondering if there is any set of n (n >= 3) continuous or somewhat smooth functions (certain polynomials), all with the same domain [0,1], f^i(min,max): [0,1] -> [min, max], where max > min and min, max in [0,1], with these properties:
1. For all v in [0,1], sum_{i=1}^{n} f^i(v) = 1: at every point of the domain [0,1] the function values sum to 1, for all possible min and max values of the ranges of the n functions.
2. The n functions range continuously between [min, max], max >= min, both in [0,1], and each reaches its maximum at some point, but only once (if possible).
3. Each function reaches its global maximum only at a point of the domain at which the other, distinct functions attain their global minimum. These global minima of each f^i may occur at multiple points of the domain [0,1].
4. For each point of the domain at which some distinct function f^j (j != i) reaches its global maximum, there is at least one point (the same point) at which f^i attains its global minimum. Correspondingly, each function f^i has at least n-1 points of the domain at which it attains its global minimum, if each of the n functions has a single point at which it attains its global maximum (namely the n-1 distinct points at which the other functions f^j, j != i, attain their global maxima). That is, the value of the domain at which one function reaches its maximum is the same value at which all the other functions reach their absolute minimum.
5. The maximum range value of each function is max(ran f^i) = 1 - sum_{j != i} min(ran f^j): the maximum of f^i, for each of the n functions, is set by 1 minus the sum of the minimum range values of the other n-1 functions.
6. Whenever the minimum values are set so as to sum to one (so that min = max for every function), all the functions become flat lines. This should be possible for every combination of min values of the n functions, i.e. every combination of n non-negative values in [0,1] that sum to one; one should be able to set the minimum value of any function to any value in [0,1], so long as the joint sum is one. Presumably, if any such function is set so that min = max, all of them are, and these mins add to one. The functions should not have to change form to make this work: for all min/max range values of the n functions and all v in their common domain [0,1], sum_i f^i(v) = 1.
7. Most importantly, they must also allow the non-trivial case where the function minimums do not sum to one (i.e. are not all flat lines), but such that the function values still sum to one at every point of the common domain [0,1]; i.e. the functions range between unequal minimum and maximum values in a continuous and smooth fashion (no gaps, steps or spikes).
In this case it must be possible for the sum of the minimum values to be some positive value smaller than one, though not greater than one nor smaller than zero, for all or some possible combinations (some values may obviously not be possible, not because the functions fail to sum to one at some point, but because the max would be set smaller than the min for some function, i.e. the sum of the mins in that combination is greater than one; e.g. with 7 functions each with min = 0.166..., the max of f^i = 1 - 1 = 0 < 0.166 = min f^i).
I was wondering whether for any n = 3, 4, 5, 6, 7, ... there are sets of such functions that will do this. Given this, it must also be possible to modify the functions so that a function only ever has a minimum of 0 if its maximum is 0, without having to set the sum of the mins of the functions to 1; and likewise so that a function only ever reaches a maximum of one if its minimum is one, unless it is possible, for the same reason, to achieve this without setting the sum of the mins to one, if indeed one does want some of the functions to range in between. What is most important is that if any function has its min and max set to one, all the remaining functions sit at zero for all v in [0,1]. It must also be the case that if the maximum range value of a given function f1 is larger than that of another function f2 in the set, then f1's minimum range value is larger than f2's, and conversely.
Relevant answer
I agree, Peter, I apologize. Nonetheless, I must insist that logic entails one can leave A > T out of the conjunction; you already contradicted yourself (you already said 'can', NOT 'must'). It does not entail that one must leave it, and consequently it does not entail it. What is necessarily possible is not necessarily necessarily necessary, and only that which is necessarily necessarily necessary is entailed or necessary. Secondly, I never said that I did not originally say max >= min; when I later said A > B, that was a correction. And I repeat, with reference even to the original statement max >= min, that I never said it was NOT tautologous, despite the fact that I DID say that it sounded tautologous. I did not say, for example, that it "only" or "merely" sounded tautologous, so in what sense did I contradict what you said? I agree that it is tautologous, and I meant that back then just as I do now, in relation to what I said originally. The original claim 'it sounds tautologous' does not contradict its being, or being interpreted as saying, 'it is tautological' in the strict sense; technically speaking it does not rule out the possible interpretation that it is, and that perhaps I meant that IT IS TAUTOLOGOUS, in the literal sense at least. It only contradicts that claim if you apply an interpretation involving common sense and the maxim of pleasant communication. Nonetheless, I apologize; I won't be like this in the future. I am just pointing out, again and again, that neither then nor now did I ever say that it was not tautologous. Common sense appears to give that implication, though, so I won't do that in the future.
Are there sets of three, or rather, for any N >= 3, sets of N surjective uniformly continuous functions (N denoting the number of functions in the set), such that each function in the set has the same domain [0,1] and the same range [0,1], and such that the functions in a given set sum to one at every point of the domain: for all v in [0,1], sum_{i=1}^{N} f_i(v) = 1? Moreover, are there arbitrarily many such sets for every N >= 3? By non-trivial I mean non-linear (and presumably not quadratic) functions which do not merely happen to sum to 1 on [0,1] by way of the algebraic sum cancelling out to the constant 1, as in the case of x and 1-x (i.e. error-correcting functions), presumably due to the nature of the derivatives. That is, the functions should sum to one regardless of the domain [0,1], or at least when the domain [0,1] is held fixed and the functions are weighted, without it being a mere artefact that the algebraic sum cancels to the constant 1 for every x.
That is: three (or N >= 3) functions, all with maximum range value one and minimum range value zero, which sum to one at every point of the domain [0,1], and which are surjective and uniformly continuous, i.e. for every value r_i in the range [0,1] there is some (at least one) value c_i in the domain [0,1] such that f(c_i) = r_i. The maximum and minimum points should coincide in the following sense: the element of the domain at which f_i equals 1 is one at which the other N-1 functions equal zero. These N maximum points (one, and only one, for each function f_i, 1 <= i <= N, for N >= 3 such functions) are distinct elements of the domain, and each is a point at which the other N-1 functions attain their minimum value 0. So for N = 3 each function has one maximum point and, presumably only, two (N-1) minimum points, such that when one function f_i(c) hits 1, the other N-1 functions f_j(c), j != i, take the value 0. And are there at least two such sets for every N >= 3?
So for N = 3 there are three distinct domain points in [0,1] corresponding to the 3-tuples of function values (f1,f2,f3)(c) = <1,0,0>, (f1,f2,f3)(c1) = <0,1,0>, (f1,f2,f3)(c2) = <0,0,1>, for three distinct elements c != c1 != c2 of the domain [0,1]: the first function is at its maximum (here 1) and the other two at their minimum (zero) at c, the second is at its maximum (1) at c1 with the first and third at their minimum (zero) there, and so on. The functions should be continuous (no gaps) and uniformly continuous (no spikes). One could not make use of such functions, if one wanted them to be weighted otherwise, if their sums either (A) just cancel out to a constant, or (B) do not cancel out to a constant but just happen to line up because the domain is [0,1]. Likewise the functions should have a similar form, so that one does not have one function with two maxima while the other two have one maximum and two minima. Perhaps Bernstein polynomials could be so weighted, but I do not know; the linear forms cancel out, but their weighted Bezier forms seem to be a little unstable from what I have read.
Relevant answer
A set of uniformly continuous surjective functions f1, ..., fn from [0,1] to [0,1] is enclosed. In particular, for n = 3: f1(x) = cos^2(2πx)·cos^2(4πx), f2(x) = cos^2(2πx)·sin^2(4πx), f3(x) = sin^2(2πx). The construction shows that many other convenient sets of functions exist.
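A quick numerical check (illustration only) that these three functions do sum to 1 everywhere on [0,1]:

```python
# Sketch: verify f1 + f2 + f3 = 1 on a fine grid of [0, 1].
import math

f1 = lambda x: math.cos(2 * math.pi * x) ** 2 * math.cos(4 * math.pi * x) ** 2
f2 = lambda x: math.cos(2 * math.pi * x) ** 2 * math.sin(4 * math.pi * x) ** 2
f3 = lambda x: math.sin(2 * math.pi * x) ** 2

worst = max(abs(f1(x) + f2(x) + f3(x) - 1.0)
            for x in (i / 10000 for i in range(10001)))
print(worst)  # ~1e-16: the sum equals 1 up to rounding error
```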
Dear professors: Good afternoon. I am researching the teaching of the triangle inequality. Are there papers about theorem production (or formulation) by secondary-level students? Which theoretical framework (in Mathematics Education) may be suitable to study the conditions for constructing a triangle from three segments? Best regards from Peru! Luis
Relevant answer
This is an excellent question with many possible answers. In addition to the helpful answers already given, there is a bit more to add. A good paper on the straightforward formulation of a theorem and its proof (useful for those starting out in getting comfortable with theorems and their proofs) is given in The beauty of this paper on a simple proof of a well-known algebraic curves theorem is its piecewise approach with the use of a number of lemmas. For related papers, see also: For an introduction to theorem proving paradigms, see A new proof of the Pythagorean theorem is given in An elementary proof of the converse of the mean value theorem is given in A very good discussion of the discovery (and invention) of theorems is given in
It is possible to write a set of quaternionic partial differential equations that are similar to Maxwell equations. For example: The quaternionic nabla ∇ acts like a multiplying operator. The (partial) differential ∇ ψ represents the full first order change of field ψ. ϕ = ∇ ψ = ϕᵣ + 𝟇 = (∇ᵣ + 𝞩 ) (ψᵣ + 𝟁) = ∇ᵣ ψᵣ − ⟨𝞩,𝟁⟩ + ∇ᵣ 𝟁 + 𝞩 ψᵣ ±𝞩 × 𝟁 The terms at the right side show the components that constitute the full first order change. They represent subfields of field ϕ and often they get special names and symbols. 𝞩 ψᵣ is the gradient of ψᵣ ⟨𝞩,𝟁⟩ is the divergence of 𝟁. 𝞩 × 𝟁 is the curl of 𝟁 The equation is a quaternionic first order partial differential equation. ϕᵣ = ∇ᵣ ψᵣ − ⟨𝞩,𝟁⟩ (This is not part of Maxwell equations!) 𝟇 = ∇ᵣ 𝟁 + 𝞩 ψᵣ ±𝞩 × 𝟁 𝜠 = −∇ᵣ 𝟁 − 𝞩 ψᵣ 𝜝 = 𝞩 × 𝟁 From the above formulas follows that the Maxwell equations do not form a complete set. Physicists use gauge equations to make Maxwell equations more complete. χ = ∇* ∇ ψ = (∇ᵣ − 𝞩 )(∇ᵣ + 𝞩 ) (ψᵣ + 𝟁) = (∇ᵣ ∇ᵣ + ⟨𝞩,𝞩⟩) ψ and ζ = (∇ᵣ ∇ᵣ − ⟨𝞩,𝞩⟩) ψ are quaternionic second order partial differential equations. χ = ∇* ϕ and ϕ = ∇ ψ split the first second order partial differential equation into two first order partial differential equations. The other second order partial differential equation cannot be split into two quaternionic first order partial differential equations. This equation offers waves as parts of its set of solution. For that reason it is also called a wave equation. In odd numbers of participating dimensions both second order partial differential equations offer shape keeping fronts as part of its set of solutions. After integration over a sufficient period the spherical shape keeping front results in the Green's function of the field under spherical conditions. 𝔔 = (∇ᵣ ∇ᵣ − ⟨𝞩,𝞩⟩) is equivalent to d'Alembert's operator. ⊡ = ∇* ∇ = ∇ ∇* = (∇ᵣ ∇ᵣ + ⟨𝞩,𝞩⟩ describes the variance of the subject Maxwell equations must be extended by gauge equations in order to derive the second order partial wave equation. Maxwell equations use coordinate time, where quaternionic differential equations use proper time. In terms of quaternions the norm of the quaternion plays the role of coordinate time. These time values are not used in their absolute versions. Thus, only time intervals are used. The quaternionic nabla obeys some other pure mathematical relations: ⟨𝞩 × 𝞩, 𝟁⟩=0 𝞩 × (𝞩 × 𝟁) = 𝞩⟨𝞩,𝟁 ⟩ − ⟨𝞩,𝞩⟩ 𝟁 (𝞩𝞩) ψ = (𝞩 × 𝞩) ψ − ⟨𝞩,𝞩⟩ ψ = (𝞩 × 𝞩) 𝟁 − ⟨𝞩,𝞩⟩ ψ = 𝞩⟨𝞩,𝟁 ⟩ − 2 ⟨𝞩,𝞩⟩ ψ + ⟨𝞩,𝞩⟩ ψᵣ The term (𝞩 × 𝞩) ψ indicates the curvature of field ψ. The term ⟨𝞩,𝞩⟩ ψ indicates the stress of the field ψ. (𝞩 × 𝞩) ψ + ⟨𝞩,𝞩⟩ ψ = 𝞩⟨𝞩,𝟁 ⟩ − ⟨𝞩,𝞩⟩ ψᵣ Einstein equations for general relativity use the curvature tensor and the stress tensor. Above is shown that some terms of the partial differential equations relate to terms in Einstein's equations. The advantage of writing equations with nabla based operators instead of with the help of tensors is that these PDE's are more compact and therefore easier comprehensible. The disadvantage is that the quaternionic PDE's enforce you to work in an Euclidean space-progression structure instead of in a spacetime structure that has a Minkowski signature. Personally I consider the Euclidean structure as an advantage, but the Minkowski signature is more in concordance with mainstream physics.
Relevant answer
Stefano, I undertake the conversion enterprise because for me the PDE's are better comprehensible than tensor equations. For similar reasons I use quaternions instead of Clifford algebras.
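For readers less used to the quaternionic notation, the scalar/divergence and vector/gradient/curl split of ∇ψ quoted above mirrors the structure of the ordinary quaternion product q1·q2 = (r1·r2 − ⟨v1,v2⟩, r1·v2 + r2·v1 + v1×v2). A minimal numerical sketch (my illustration, not the author's code):

```python
# Sketch: the quaternion product has the same scalar/dot-product and
# vector/cross-product structure as the first-order change phi = nabla psi above.
import numpy as np

def qmul(q1, q2):
    r1, v1 = q1[0], np.asarray(q1[1:])
    r2, v2 = q2[0], np.asarray(q2[1:])
    scalar = r1 * r2 - np.dot(v1, v2)
    vector = r1 * v2 + r2 * v1 + np.cross(v1, v2)
    return np.concatenate(([scalar], vector))

q1 = np.array([1.0, 2.0, -1.0, 0.5])
q2 = np.array([0.5, 1.0, 3.0, -2.0])
print(qmul(q1, q2))
print(qmul(q2, q1))  # differs in the cross-product term: the product is non-commutative
```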
I have a triangle mesh. I calculate the normals of the triangles, then calculate the vertex normals and do some calculations on them, and I want to calculate the vertex coordinates from these vertex normals after doing the calculations.
Relevant answer
Look at this document; it may be helpful for your topic. Good luck.
In this figure of concentric circles, the circumference of A is less than the circumference of B, and so on; the distance between neighbouring circles is the same, s, so that circA < circB < circC < circD < circE. When XA finishes moving round circumference A, it moves (transits) to the next circle, B, to help XB complete moving round that circumference. When XA and XB complete the movement round circumference B, they make a transition to circumference C and help XC complete the movement round circumference C. After that completion they transit onward together in the same way, and so on. The question is this: how can this be presented mathematically (arithmetically)?
Relevant answer
Your statement of the problem is not clear. One can interpret it in two ways. If you have only one person (or object) moving and n circles, the answer is: D(n) = n(n+1)pi + (n-1)s. If you have n persons moving (one on each circle), the answer is: D(n) = (ns/2){[(n+1)(2n+1)pi/3] + n-1}. I got this by computing D(n) = 2pi.s (sum of i squared from 1 to n) + s.(sum of i from 1 to n-1).
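One way to make the single-mover interpretation concrete is a direct simulation; the sketch below assumes (my assumption, not stated explicitly above) that the radius of the i-th circle is i·s, so that its circumference is 2πis, and that each transit between neighbouring circles costs s.

```python
# Sketch: total path length for one mover that traverses each circle once and
# transits between neighbouring circles, under the radius-i*s assumption.
import math

def total_distance(n: int, s: float) -> float:
    circumferences = sum(2 * math.pi * i * s for i in range(1, n + 1))
    transits = (n - 1) * s
    return circumferences + transits

for n in range(1, 6):
    print(n, total_distance(n, s=1.0))
```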
I am interested in prime number generation. Apart from the 2^p − 1 formula given so many years ago by the French mathematician, are there known formulas for determining the next prime?
Relevant answer
You are right; this is why I wrote that I will come back. The general formulas are: q^p − (q−1) and q^p − (q−2), up to q^p − 1. This opens a new field in the study of prime numbers and allows the generalization of some theorems.
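As a practical aside (my addition, independent of the formulas above): in computation the next prime is usually found by testing successive candidates with a primality test rather than by a closed formula, e.g. with sympy:

```python
# Sketch: next-prime by primality testing, plus the Mersenne-style 2**p - 1 candidates.
from sympy import nextprime, isprime

p = 2
for _ in range(10):
    print(p, end=" ")
    p = nextprime(p)          # 2 3 5 7 11 13 17 19 23 29
print()

# 2**p - 1 is prime only for some prime exponents p:
print([p for p in range(2, 20) if isprime(2**p - 1)])  # [2, 3, 5, 7, 13, 17, 19]
```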
Suppose $u(n)$ is the Lie algebra of the unitary group $U(n)$, why the dual vector space of $u(n)$ can be identified with $\sqrt{-1}u(n)$?
Relevant answer
Hi Pan, $u(n)$ is a real Lie algebra, and in particular a real vector space. Using a non-degenerate symmetric bilinear form on $u(n)$, you can identify $u(n)$ with its dual vector space. The $\sqrt{-1}$ is not that important in a sense, and probably comes from using a non-degenerate pairing between skew-hermitian and hermitian matrices (instead of a non-degenerate symmetric bilinear form). $u(n)$ is the space of skew-hermitian $n$ by $n$ matrices, and $\sqrt{-1} u(n)$ is the space of hermitian $n$ by $n$ matrices by the way.
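A small numerical illustration of that pairing (my sketch, not part of the answer): for A skew-hermitian and H hermitian, trace(AH) is purely imaginary, so taking its imaginary part gives a real, non-degenerate pairing between $u(n)$ and $\sqrt{-1}u(n)$.

```python
# Sketch: trace(A @ H) is purely imaginary when A is skew-hermitian and H is hermitian,
# so <A, H> = Im(trace(A @ H)) is a real pairing between u(n) and i*u(n).
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = M - M.conj().T          # skew-hermitian: A^dagger = -A, i.e. A in u(n)
H = M + M.conj().T          # hermitian:      H^dagger = +H, i.e. H in sqrt(-1)*u(n)

t = np.trace(A @ H)
print(abs(t.real))          # numerically zero: the real part vanishes
print(abs(t.imag) > 0)      # True in general: the pairing is non-trivial
```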
Could any mathematical expert look at my attachment? I have highlighted a few mathematical symbols. Can anyone tell me what those symbols signify and how to understand them?
Relevant answer
Naveen, you probably did not have good teachers who explain the basic ideas of hydrodynamics "on their fingers". Imagine water over a rigid bottom. If it is incompressible, the Laplace equation is valid at every internal point. \psi is the velocity potential, and \eta is the deviation of the free surface from equilibrium. Due to incompressibility, the total volume of water is preserved, and thus the integral of the deviation function over the unperturbed surface is zero. The condition on the bottom says that the velocity is locally parallel to the bottom (given by a differentiable function); its normal component is zero. As for the notation: recall that the symbol resembling the Euro sign means that an element belongs to a set. R is the set of real numbers, while the other sets in your case are subsets of R1 (the real line) or R2 (the plane). You can look at equations (1) in my attached article to see what is what and what one can do with it: https://www.researchgate.net/publication/275581998_Evolution_of_long_nonlinear_waves_on_shelves
As we know, an elliptic curve defined over Fq with a rational 2-torsion subgroup can be expressed in the special form (up to twists). Accordingly a natural question arises about the number of distinct (up to isomorphism) elliptic curves over Fq in the family.
Let HT denote the statement of Hindman's theorem. Within RCA0 one can prove that: 1. HT implies ACA0. 2. HT can be proved in ACA0+. An open question is the strength of Hindman's theorem: is HT equivalent to ACA0+, or to ACA0, or does it lie strictly between them?
Relevant answer
This is a very good question. A good place to start in answering this question is Unfortunately, this paper is not available on RG but it is available on a University of Connecticut web page at See also where the issue of the strength of Hindman's theorem is raised.
Let f(x) + g(x) = h(x), where h(x) attains its minimum at the points (a1, a2, ..., ak). Under which condition can we say that f(x) also attains its minimum at the points (a1, a2, ..., ak)? Thanks in advance for your ideas, and please give any reference.
Relevant answer
My answer above gives a sufficient condition (on g) for the desired conclusion on f, and it is independent of any differentiability hypothesis. It follows by simple manipulation of inequalities. Hence this implication holds and one can say something on this subject.
Let $p(.)$ be an equivalent norm to the usual norm on $\ell_1$ such that $$\limsup\limits_{n\to\infty} p(x_n+x)=\limsup\limits_{n\to\infty}p(x_n)+p(x)$$ for every $w^*-$null sequence $(x_n)$ and for all $x\in\ell_1,$ moreover, let $$\rho_{k}(x)=p(x)+\lambda\gamma_{k}\sum\limits_{n=k}^{\infty}|x_n|,$$ where, $(\gamma_{k})$ be any non-decreasing sequence in $(0,1)$ and $\lambda >0$. I'd like to prove for every $w^*-$null sequence $(x_n)$ and for all $x\in\ell_1,$ $\limsup\limits_{n\to\infty}\rho_k(x_n+x)=\limsup\limits_{n\to\infty}\rho_k(x_n)+\rho(x)$ . **My attempt is the following** \begin{align} \limsup\limits_{n\to\infty}\rho_k(x_n+x) =\limsup\limits_{n\to\infty} p(x_n+x) +\limsup\limits_{n\to\infty}\lambda\gamma_{k}\sum\limits_{n=k}^{\infty}|x_n+x| \\ =\limsup\limits_{n\to\infty}p(x_n)+p(x) +\limsup\limits_{n\to\infty}\lambda\gamma_{k}\sum\limits_{n=k}^{\infty}|x_n+x|\\ \end{align} Now I could not proceed to prove, any ideas or hints would be greatly appreciated. Thanks in advance
Relevant answer
First of all, I suppose that you consider $\ell_1$ as dual to $c_0$, so by $w^*-$null sequence in $\ell_1$ you mean such a sequence $x_n$ that is norm-bounded and goes to zero coordinate-wise. In your question you are using letter $x_n$ in two different senses: as elements of $\ell_1$ and as coordinates of an element $x$. This leads to a confusion. If you denote coordinates of $x$ as $x_n$ and consider vectors $y_n = (y_{n,1}, y_{n,2}, \ldots)$, then the formula you are going to demonstrate is $\limsup\limits_{n\to\infty}\rho_k(y_n+x)=\limsup\limits_{n\to\infty}\rho_k(y_n)+\rho(x)$ The hint to this exercise is: if a sequence of vectors $y_n \in \ell_1$ is norm-bounded, goes to zero coordinate-wise and if there exists the limit $\lim_{n \to \infty} \|y_n\|$, then there exists $\lim_{n \to \infty} \|x +y_n\|$, and $$\lim_{n \to \infty} \|x +y_n\| = \lim_{n \to \infty} \sum_{j=1}^\infty |x_j + y_{n,j}| = \|x\| + \lim_{n \to \infty} \|y_n\|.$$
Relevant answer
Yes. Pure mathematics is useful for theoretical physics. Mathematics is nothing but the logical expression of physics and physical things. It is much clearer than physical logic if the methodology is right. A physical concept has to be converted into a mathematical equation, and the mathematics will carry it to a final equation which can explain a new concept that is not visible to conceptual logic. Conceptual logic must provide a physical phenomenon which can be observed and verified physically. Otherwise the mathematics will create complicated and chaotic outputs in theoretical physics.
Every natural number n can be written as n = a_0 + a_1·10^1 + a_2·10^2 + ..., with each a_i between 0 and 9. Can we devise a new method to find the divisors of n, apart from the well-known method of prime factorization? If so, we could provide a new method to calculate the sum-of-divisors function.
Relevant answer
Dear Hanifa, could you please write down an example?
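For comparison with any new digit-based method, here is the standard factorization route in Python (my illustration; sympy exposes the divisors and the sum-of-divisors function directly):

```python
# Sketch: divisors and sigma(n) via prime factorization.
from sympy import factorint, divisors, divisor_sigma

n = 360
print(factorint(n))        # {2: 3, 3: 2, 5: 1}
print(divisors(n))         # all 24 divisors of 360
print(divisor_sigma(n))    # 1170 = product over p^a of (p^(a+1) - 1) / (p - 1)
```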
In some cases, learners find it easier to deal with decimal fractions than with proper and improper fractions. Faced with the complex structure of fractions, adding or subtracting them seems harder and almost impossible, e.g. 0.5 + 3.3 = 3.8 versus 1/2 + 33/10 = 38/10.
Relevant answer
Since children have had intensive experience with numbers, in my experience working with children at the primary level, adding decimal numbers is not a difficult concept for them to comprehend. The challenge is just to help children extend their understanding of the place value of one tenth, one hundredth, etc. However, a fraction is a relatively new concept for children to grasp. Hence, I feel that operations with fractions should be taught later, once they are comfortable with the concept of fractions itself.
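As a small aside for checking classroom examples (my addition), Python's Fraction type confirms that the decimal and fractional computations in the question agree:

```python
# Sketch: 0.5 + 3.3 and 1/2 + 33/10 are the same number, 38/10 = 19/5 = 3.8.
from fractions import Fraction

print(Fraction(1, 2) + Fraction(33, 10))          # 19/5
print(float(Fraction(1, 2) + Fraction(33, 10)))   # 3.8
```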
A mathematical colleague and I are working on an article which uses pure mathematical analysis for equilibrium in equity crowdfunding. We drew inspiration from the model of consumer-product (brand) preference to build a mathematical model of investor-project preference on a crowdfunding platform. A common point between the consumer-product and investor-project relations is that an agent has to choose among different options with optimal efficiency. We presented a first draft recently at a conference, and non-mathematician researchers had difficulty following and understanding our paper. How can we make such mathematical reasoning more understandable for non-mathematicians? Do you know of any article we could use as a model? What is your advice?
With evidence or a reference: who first discovered the base of the natural logarithm, e?
Relevant answer
Does anyone know something about the first sciences, especially mathematics? 1. My question is: when did the first sciences emerge, and which were they? 2. What are the place and role of mathematics among the oldest sciences? I will start with a few remarks about mathematics: 1. By its paradigmatic place in the domain of human knowledge alone, independent of all other valid reasons, mathematics deserves a special position. 2. The oldest known thinkers of ancient civilization were characterized by the mathematical form of knowledge, and since then it has served as a model of scientific value and a measure of the exactness of knowledge as a whole. 3. Already in the Middle Ages, mathematics in its former division accounted for two of the seven liberal arts studied at the traditional university (geometry and arithmetic, in the quadrivium). A third of the seven, logic, from the trivium, would today correspond, in the form of mathematical logic, to what is also regarded as one of the domains of mathematics.
A well-known result of B. M. Levitan and T. V. Avadhani asserts that the Riesz summability of order k of the eigenfunction expansion of f(P) from L2(D), at a point P = P0 in D, depends only on the behaviour of f(P) in a neighbourhood of P0 if k > (n-1)/2; i.e. it is a local property of f(P) at the considered point P0 when k > (n-1)/2. Is it possible to prove (applying Parseval's formula) the analogue of Avadhani's theorem for Avakumović's G-method of summability? A crucial step in the proof of this theorem is to find a function g that leads us to the core of Avakumović's summability, which is more complex than the core of Riesz's summability.
Relevant answer
Dear Cenap Özel, if you have a problem with the Russian presentation of my work "About one application Parseval's formula to the Avakumović's G - method of summability of eigenfunction expansion", I want to inform you that it will soon be published in the Sarajevo Journal of Mathematics, in an issue dedicated to the memory of my professor, Academician Mahmud Bajraktarević. Sincerely, Mirjana Vukovic
Since it is difficult to write mathematical formulae here, please consider the attached file.
Relevant answer
$F_{5^k n}(q) \equiv 0 \pmod{[5^k]_q}$ is equivalent to $F_{5^k}(q) \equiv 0 \pmod{[5^k]_q}$, because $q^{5^k + n} \equiv q^{5^k} \pmod{[5^k]_q}$ and therefore $F_{5^k(n + 1)}(q) \equiv F_{5^k n}(q)\,F_{5^k n + 1}(q) \equiv 0 \pmod{[5^k]_q}$. Therefore it suffices to show that $F_{5^k}(q) \equiv 0 \pmod{[5^k]_q}$.
We define a factoriangular number (Ftn) as the sum of a factorial and its corresponding triangular number, that is, Ftn = n! + n(n+1) / 2. If both n and m are natural numbers greater than or equal to 4, is there an Ftn that is a divisor of Ftm? Please also see the article provided in the link below, specifically Conjecture 2 on pp. 8-9.
Relevant answer
Let m=Ftn if Ftn is odd, and m=2Ftn if Ftn is even. Then Ftn divides Ftm.
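The claim is easy to check numerically for small n; here is a minimal sketch (function names are mine).

```python
from math import factorial

def ftn(n: int) -> int:
    """Factoriangular number: n! + n(n+1)/2."""
    return factorial(n) + n * (n + 1) // 2

# Check the claim above for a few small n >= 4:
# choose m = Ft_n if Ft_n is odd, and m = 2*Ft_n if Ft_n is even.
for n in range(4, 8):
    f = ftn(n)
    m = f if f % 2 == 1 else 2 * f
    assert ftn(m) % f == 0, (n, m)
print("Ft_n divides Ft_m for n = 4..7 with m chosen as above")
```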
My function is nonlinear with respect to a scalar \alpha. However, the calculation of the objective function is very time consuming, which makes the optimization very time consuming as well. Also, I have to do it for half a million voxels (the 3D equivalent of pixels). I plan to do it using "lsqnonlin" in MATLAB. Rather than optimizing over all possible real values, I plan to search over 60 preselected values. My variable \alpha (the flip-angle error) could be anything between 0 and 35%, but I want to pass only linearly spaced points as candidates (i.e. 0:0.005:0.35). In other words, I want lsqnonlin to choose a possible solution only from 0:0.005:0.35. Since I can pre-calculate the objective values for these candidates, it would be very fast. In other words, I need to restrict the search space. Here I am talking about a single voxel, though I run lsqnonlin over multiple voxels and the corresponding \alpha values are mapped accordingly to a column vector. I cannot do a plain grid search over the preselected values because I plan to perform spatial smoothing in 3D. Some guidance would be highly appreciated. Regards, Dushyant
Relevant answer
My two cents on the question: 1) What you want to do is discrete optimization (see the link). A discrete-coded (binary) genetic algorithm, for example, could do the trick. 2) You could perform a factorial design of experiments (factorial d.o.e.) with any number of levels per parameter, but the fraction should be very small for it to be computable (see below for why). Half a million voxels as optimization variables is rather difficult to optimize, even if each is varied over only 60 levels. It is probably possible to run on a supercomputer, but first consider whether the problem is formulated correctly and whether the results would be significant, i.e. whether it is worth trying.
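Setting the spatial-coupling aspect aside for a moment, the per-voxel version of "restrict the search to a precomputed grid" can be sketched as follows (a minimal numpy sketch; the toy objective, array shapes and names are placeholders of mine, not from the original post).

```python
import numpy as np

# Candidate flip-angle errors (hypothetical grid, as in the question).
alphas = np.arange(0.0, 0.35 + 1e-9, 0.005)            # 71 candidate values

def residuals(alpha, data):
    """Placeholder objective: squared residual per voxel.
    Replace with the (expensive) signal model, evaluated once per candidate."""
    model = np.cos(alpha)                               # stand-in for the real model
    return (data - model) ** 2

n_voxels = 1000
data = np.random.rand(n_voxels)                         # stand-in voxel data

# Pre-compute the objective for every candidate and every voxel ...
cost = np.stack([residuals(a, data) for a in alphas])   # shape (n_alpha, n_voxels)
# ... then pick, per voxel, the candidate with the smallest cost.
alpha_map = alphas[np.argmin(cost, axis=0)]             # shape (n_voxels,)
```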
For example, {2}, {3,5,7}, {11,13}, {17,19}, etc. A second interesting question would be: if any such patterns terminate at some level, does the cardinality of such sets follow any pattern?
Relevant answer
Dear U. Dreher, regarding the conjecture on the 'twin numbers', let me inform you that I proved this conjecture as a particular case of de Polignac's conjecture. You can find solutions of all the Landau problems in the following papers of mine. These solutions could also answer Shahid's question. My best regards, Agostino
It is believed that there is a bijection between the infinite set of natural numbers and the infinite set of rational numbers, but the following simple story tells us that the infinite rational number set has far more elements than the infinite natural number set: the elements of a tiny portion of the rational numbers (the subset 0, 1, 1/2, 1/3, 1/4, 1/5, 1/6, ..., 1/n, ...) map onto and use up (bijectively) all the numbers in the infinite natural number set (0, 1, 2, 3, 4, 5, 6, ..., n, ...); so infinitely many rational numbers (at least 2, 3, 4, 5, 6, ..., n, ...) are left over in this one-to-one element mapping between the infinite rational number set and the infinite natural number set (not the integer set). Hence the infinite rational number set has infinitely more elements than the infinite natural number set.

This is the outcome of one one-to-one correspondence operation between these two infinite sets. It concerns only the quantity of elements of the two sets and has nothing to do with the terms "proper subset", "cardinal number", "denumerable" or "non-denumerable". Can we have many different bijection operations (proofs), with different one-to-one correspondence results, between two infinite sets? If we can, which operation and conclusion should people choose when faced with two opposite results, and why?

Such a question needs to be thought through deeply: there are indeed all kinds of different infinite sets in mathematics, but what exactly makes infinite sets different? There is only one answer: the unique elements contained in different infinite sets, that is, the characteristics of their special properties, special conditions of existence, special forms, special relationships, as well as their very special quantitative meaning. However, our studies have shown that, owing to the lack of a whole "carrier theory" in the foundation of the present classical theory of the infinite, it is impossible for mathematicians to study and cognize those unique characteristics of elements operationally and theoretically in present classical set theory. So it is impossible to carry out effective, scientific quantitative cognition of the elements of the various infinite sets; this motivates a newly constructed Quantum Mathematics. The article "On the Quantitative Cognitions to 'Infinite Things' (IX): 'The Infinite Carrier Gene', 'The Infinite Carrier Measure' and 'Quantum Mathematics'" has been uploaded to RG and introduces these working ideas. https://www.researchgate.net/publication/344722827_On_the_Quantitative_Cognitions_to_Infinite_Things_IX_---------_The_Infinite_Carrier_Gene_The_Infinite_Carrier_Measure_And_Quantum_Mathematics
Relevant answer
Dear Geng, let's use your own reasoning in a different way. Just take a tiny portion of the natural number set (2, 4, 6, ..., 2n, ...): it maps very well onto the set of all natural numbers (1, 2, 3, ..., n, ...). So a lot of natural numbers are left behind in this one-to-one mapping (2n onto n) from the natural numbers onto the natural numbers. Hence your conclusion should be: the natural number set has far more elements than the natural number set. Think about this. Best regards, Joseph.
Dear RG friends: In two weeks' time I am all set to conduct a technical session on Analysis, and I plan to deliver a long lecture on "Fixed Point Theorems". Of course, the Banach fixed point theorem is useful for establishing the local existence and uniqueness of solutions of ODEs, and contraction mapping ideas are also useful for developing some simple numerical methods for solving nonlinear equations. Are there any other interesting science / engineering applications? Kindly let me know! Thank you for the kind help. With best wishes, Sundar
Relevant answer
As applications: logic programming and fuzzy logic programming, Nash equilibria and game theory.
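As a small classroom illustration of the contraction-mapping idea mentioned in the question (my own minimal sketch, not tied to any answer above): fixed-point iteration for x = cos x.

```python
import math

def fixed_point(g, x0, tol=1e-12, max_iter=1000):
    """Iterate x_{k+1} = g(x_k); converges when g is a contraction
    near the fixed point (Banach fixed point theorem)."""
    x = x0
    for _ in range(max_iter):
        x_new = g(x)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    raise RuntimeError("no convergence")

x_star = fixed_point(math.cos, x0=1.0)
print(x_star, math.cos(x_star) - x_star)   # ~0.7390851332, residual ~0
```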
Let q be an odd positive integer, and let N_q denote the number of integers a such that 0 < a < q/4 and gcd(a, q) = 1. How do I see that N_q is odd if and only if q is of the form p^k, with k a positive integer and p a prime congruent to 5 or 7 modulo 8?
Relevant answer
I see that you, too, follow the Putnam Exam.
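A brute-force check of the statement for small odd q is easy to run (a sketch, not a proof; names are mine).

```python
from math import gcd

def N(q: int) -> int:
    """Number of integers a with 0 < a < q/4 and gcd(a, q) = 1."""
    return sum(1 for a in range(1, q) if 4 * a < q and gcd(a, q) == 1)

def special(q: int) -> bool:
    """True iff q = p^k with p prime and p congruent to 5 or 7 mod 8."""
    p = 3
    while p * p <= q:
        if q % p == 0:
            break
        p += 2
    else:
        p = q                      # q itself is prime
    while q % p == 0:
        q //= p
    return q == 1 and p % 8 in (5, 7)

for q in range(3, 2000, 2):        # odd q only
    assert (N(q) % 2 == 1) == special(q), q
print("statement verified for all odd q < 2000")
```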
As the title suggests, how do I see that for any n, the covering map S^{2n} → RP^{2n} induces 0 in integral homology and cohomology, except in dimension 0?
Relevant answer
Maybe you would like to see geometrically how the n-th homology of S^n can be killed when projected to RP^n? Take a triangulation of the n-sphere which is invariant under the antipodal map -I , e.g. the generalized octahedron (which exists in every dimension). The projection is a triangulation of the projective space RP^n, and each of its simplices is hit twice by the projection map S^n \to RP^n. However, when n is even, the antipodal map -I on R^{n+1} reverses orientation (determinant -1) but preserves the exterior normal vector field of S^n, thus it reverses the orientation on S^n. Hence each simplex contributes with both signs and you get extinction, while in odd dimensions you obtain a factor 2 (the mapping degree of the projection map). Best regards Jost
Say that a definition is self-referential provided that it contains either an occurrence of the defined object or a set containing it. For instance: Example 1) n := (n∈ℕ)⋀(n = n⁴)⋀(n > 0). This is a definition of the positive integer 1, and it is self-referential because it contains occurrences of the defined object, denoted by n. Example 2) Def := "The member of ℕ which is the smallest odd prime." Def is a self-referential definition because it contains an occurrence of the set ℕ, which contains the defined object. Now let us consider the following definition. Def := "The set K of all non-self-referential definitions." If Def is not a self-referential definition, then it belongs to K, hence it is self-referential. By contrast, if Def is self-referential, it does not belong to K, and therefore it is non-self-referential. Can you resolve this paradox? Take into account that non-self-referential definitions are widely used in math.
Relevant answer
Juan-Esteban, that's a nice way to avoid the so-called Russell paradox at any finite stage of the process. It feels as if you avoid the paradox by rejecting infinity. If you read my post carefully, you will see that the problem is solved in a more fundamental way, with or without considerations of infinite sets. Formal logic tells you that there cannot be x such that (for all y) ( not P(y,y) <--> P(y,x) ), whatever you mean by P(y,x) and whatever your universe of discourse may be. If you take the "barber definition" (anyone shaving all those people not shaving themselves), even in an imaginary infinite society of humans, it turns out that there is no such barber. In fact, it is not a definition, because it deals with nothing. Let me illustrate this with a more obvious contradiction: Remarkable_Weather := weather with rain and yet without rain. Such a "definition" is verbosity with (literally) no subject, hence with no meaning. You said: "A paradox either leads to a contradiction or it is not a paradox." With your quoted statement at hand, my "claim" that there is Remarkable_Weather deserves to be called a paradox (it leads to a contradiction; even stronger, it is a contradiction). That is too cheap, don't you think? I prefer Webster's definition of "paradox": a tenet or proposition contrary to received opinion; an assertion or sentiment seemingly contradictory, or opposed to common sense; that which in appearance or terms is absurd, but yet may be true in fact. The only difference between having Remarkable_Weather and the Russell paradox is that the latter involves a slightly more hidden logical contradiction.
Two numbers a and b are elements of the set of real numbers but not of the set of rational numbers; they are irrational. Are there cases where a times b, or a divided by b, yields a member of the set of integers? How rare or commonplace is it for products of irrational numbers to yield rational values?
Relevant answer
If a is irrational and n is a nonzero integer, then b := n/a is irrational (!) and ab = n.
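A concrete instance of this construction: take $a=\sqrt 2$ and $n=6$, so $b=6/\sqrt 2=3\sqrt 2$ is irrational, and
$$ab=\sqrt 2\cdot 3\sqrt 2=6\in\mathbb Z,\qquad \frac ab=\frac{\sqrt 2}{3\sqrt 2}=\frac13\in\mathbb Q .$$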
I was working on 2 papers on statistics when I recalled a study I'd read some time ago: "On 'Rethinking Rigor in Calculus...,' or Why We Don't Do Calculus on the Rational Numbers". The answer is obviously trivial, and the paper was really a response to another one suggesting that we eliminate certain theorems and their proofs from elementary collegiate calculus courses. But I started to wonder (initially just as a thought exercise) whether one could "do calculus" on the rationals and, if so, whether the benefits could outweigh the restrictions.

Measure theory already allows us to construct countably infinite sample spaces. However, many researchers who regularly use statistics haven't even taken undergraduate probability courses, let alone courses on, or that include, rigorous probability. Also, even students like engineers who take several calculus courses frequently don't really understand the real number line, because they've never taken a course in real analysis. The rationals are the only set we learn about early on that has so many of the properties the reals do, in particular infinite density. So, for example, textbook examples of why integration isn't appropriate for pdfs of countably infinite sets typically use examples like the binomial or Bernoulli distributions, but such examples are clearly discrete.

Other objections to defining the rationals to be continuous include:
1) The irrational numbers were discovered over 2,000 years ago, and the attempts to make calculus rigorous since then have (almost) always taken as desirable the inclusion of numbers like pi or sqrt(2). Yet we know from measure theory that the line between discrete and continuous can be fuzzy, and that we can construct abstract probability spaces that handle both countable and uncountable sets.
2) We already have a perfectly good way to deal with countably infinite sets using measure theory (not to mention both discrete calculus and discretized calculus). But the majority of those who regularly use statistics, and therefore probability, aren't familiar with measure theory.
The third and most important reason is actually the question I'm asking: nobody has bothered to rigorously define the rationals to be continuous, allowing a more limited application of the differential and integral calculi, because there are so many applications which require the reals, and (as noted) we already have superior ways of dealing with any arbitrary set. Yet most of the reasons we can't, e.g., integrate over the rationals in the interval [0,1] have to do with the intuitive notion that the interval contains "gaps" where we know irrational numbers exist, even though the rationals are infinitely dense. It is, in fact, possible to construct functions that are continuous on the rationals and discontinuous on the reals. Moreover, we frequently use statistical methods that assume continuity even though the outcomes can't ever be irrational-valued. Further, the Riemann integral is defined in elementary calculus, and often elsewhere, through an integer-indexed and thus countable set of summed "terms" (i.e., a function that is Riemann integrable over the interval [a,b] is integrated by a summation over i = 1, 2, 3, ... of f(x*_i)Δx; whatever values the function may take, by definition the terms/partitions are indexed by integers i).
As for the gaps, work since Cantor in particular (e.g., the Cantor set) has demonstrated how the rationals "fill" the entire unit interval, in the sense that one can, e.g., recursively remove infinitely many thirds from it with total length equal to 1 and yet be left with infinitely many remaining numbers. In addition to objections, mostly from philosophers, to the claim that even the reals are continuous, we know the real number line has "gaps" in some sense anyway; how many "gaps" depends on whether or not one thinks that, in addition to sqrt(-1), the number line should include hyperreals or other extensions of R1. Finally, in practice (or at least in application) we never deal with real numbers anyway; we can only approximate their values.

Another potential use is educational: students who take calculus (including multivariable calculus and differential equations) never gain an appreciable understanding of the reals, because they never take courses in which these are constructed. Initial use of derivatives and integrals defined on the rationals, and then on the reals, would at least make clear that there are extremely nuanced, conceptually difficult properties of the reals, even if these were never elucidated.

However, I've been sick recently and my head has been in a perpetual fog from cold medicines, so the time I have available to answer my own question is temporarily too short. I start thinking about, e.g., the relevance of the differences between uncountable and countable sets, compact spaces and topological considerations, or the fact that if we assumed there are no "gaps" where real numbers would be, we'd encounter issues with, e.g., least upper bounds, but I can't think clearly and I get nowhere: the medication-induced fog won't clear. So I am trying to take the lazy, cowardly way out and ask somebody else to do my thinking for me, rather than wait until I am no longer taking cough suppressants and similar meds.
Relevant answer
David: We actually do integrate over the rational numbers. Probably the most essential integration formula is that of the integral of x^n over the interval [0,1], and its value can be established entirely over the rationals. You can have a look at my Famous Math Problems 10 video at my channel (user njwildberger). There are some of us who don't believe in the infinite-precision dream which supports the "real numbers". If you are interested in why, my recent seminar "A Socratic look at logical weaknesses in modern pure mathematics" gives some reasons; it is also at my YouTube channel.
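For what it is worth, the kind of computation presumably meant is the following (my summary): the equal-partition Riemann sums of $x^n$ on $[0,1]$ involve only rational numbers, and Faulhaber's formula $\sum_{k=1}^{N}k^{n}=\frac{N^{n+1}}{n+1}+O(N^{n})$ gives
$$\frac1N\sum_{k=1}^{N}\Bigl(\frac kN\Bigr)^{n}=\frac{1}{N^{\,n+1}}\sum_{k=1}^{N}k^{n}=\frac{1}{n+1}+O\!\Bigl(\frac1N\Bigr)\;\xrightarrow[N\to\infty]{}\;\frac{1}{n+1},$$
so the value $\tfrac1{n+1}$ is reached as a limit of purely rational arithmetic.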
I have come up with my own equation for pi (I have no idea whether it is new): π ≈ (√2/2) · n · √(1 − cos(dΘ)), where n is the number of triangles inscribed in the circle and dΘ is the apex angle of each triangle, which tends to 0. Say I put n = 1440, so dΘ = 360/1440 = 0.25°; putting this into the equation I get π ≈ 3.141590118…. If I put n = 2880, so dΘ = 0.125°, I get π ≈ 3.141591603…. If I put n = 5760, so dΘ = 0.0625°, I get π ≈ 3.141592923…. We know π = 3.141592654…, but I can never really find n and dΘ that give exactly that answer. Can anyone come up with a good idea?
Relevant answer
Dear Mason, I don't know; I took this from you. Best regards, Henri.
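The formula is exactly the inscribed-polygon estimate $n\sin(\pi/n)$, since $1-\cos d\Theta = 2\sin^2(d\Theta/2)$ with $d\Theta = 2\pi/n$, so it approaches $\pi$ only in the limit $n\to\infty$; no finite $n$ gives $\pi$ exactly. A quick numerical check (a sketch; variable names are mine):

```python
import math

# pi ≈ (sqrt(2)/2) * n * sqrt(1 - cos(dθ)) with dθ = 2π/n (= 360°/n),
# which simplifies to n*sin(π/n) and increases toward π from below.
for n in (1440, 2880, 5760, 11520):
    dtheta = 2 * math.pi / n
    approx = (math.sqrt(2) / 2) * n * math.sqrt(1 - math.cos(dtheta))
    print(n, approx)     # 3.1415901..., 3.1415920..., 3.1415925..., 3.1415926...
```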
If a polynomial P(z) of degree n omits the value w in |z|<1, show that P(z) + (1-e^{ih}) z P'(z)/n also omits w in |z|<1 for every real h. I know at least two proofs of this result; one follows by using Laguerre's theorem concerning the polar derivative of a polynomial. I want to find a direct proof of this result without using any known theorem.
Relevant answer
It seems that you should give more details. Is this a known result? Is the hypothesis that h is real? Note that the set $\{1-e^{ih}: h \in \mathbb{R}\}$ is the circle $K(1;1)$. One can try to use induction, or the Bernstein inequality together with the argument principle.
If $a_1, a_2, \ldots, a_n$ are given positive integers in strictly increasing order, what would be the best possible lower bound of $|1 + z^{a_1} + z^{a_2} + \cdots + z^{a_n}|$ for $|z| > 1$?
Relevant answer
It seems that 0 is exactly this lower bound in the general case. Indeed, the product of all roots of the polynomial $1+\cdots+z^{a_n}$ has modulus 1. Therefore, it is impossible that all these roots lie strictly within the unit circle. Hence some roots lie on $|z|=1$ or even outside the unit circle.
PARITY is the question of whether a unary predicate of a structure holds for an even number of elements. If a logic can define PARITY, then there is a formula of this logic such that PARITY returns True on a structure iff the structure is a model of this formula. We know that logics with counting can easily define PARITY, but what about logics without counting?
Relevant answer
Right, Peter. I just wanted to avoid such things as "there is a set" or "there is a function", so I proposed a first-order formula containing given predicates and a function symbol. Better, let me make it formal here (for Arthur much more than for Peter). All finite models of the following axiom have even cardinality, and every set of even cardinality can be expanded to a model of the following axiom. Language L = { A(.), f(.) } (one unary predicate and one unary function). Axiom: for all x, y: [ (A(x) → not A(f(x))) and (not A(x) → A(f(x))) and (x ≠ y → f(x) ≠ f(y)) ]. Let M be a finite model. The function f : M → M is injective. Let A ⊆ M be the set of realizations of A(x) and let M \ A be the set of realizations of not A(x). Then f(A) ⊆ M \ A and f(M \ A) ⊆ A. By injectivity of f, |A| ≤ |M \ A| follows from the first inclusion and |M \ A| ≤ |A| from the second. So |M \ A| = |A|, hence |M| is even. The theory is not complete: f can be an involution, or not. For example, take M = {1, 2, 3, 4} and A = {1, 3}; then f: 1 → 2 → 3 → 4 → 1 is not an involution, while 1 → 2 → 1, 3 → 4 → 3 is an involution. The proposition "for all x, f²(x) = x" is true in the second model but false in the first. However, every finite set with even cardinality can be expanded to such a model in many ways. The problem might be that the negation of this axiom does not imply that the models are of odd cardinality. OK, I guess that at this point one would really need second-order predicate logic! [If there do not exist a subset A and a function f such that the axiom is fulfilled, then the cardinality must indeed be odd.]
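A brute-force sanity check of this axiom on small universes (my own sketch; it simply enumerates all functions f and all subsets A):

```python
from itertools import product

def has_model(n: int) -> bool:
    """Is there a subset A of M = {0,...,n-1} and an injective f: M -> M
    with A(x) <-> not A(f(x)) for all x?"""
    M = range(n)
    for f in product(M, repeat=n):                     # all functions M -> M
        if len(set(f)) != n:                           # keep injective ones only
            continue
        for A in product((False, True), repeat=n):     # all subsets of M
            if all(A[x] != A[f[x]] for x in M):
                return True
    return False

for n in range(1, 7):
    print(n, has_model(n))    # True exactly for even n
```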
I am just wondering. For example, for the series 1+2+3+4+5+6+..., the general term T_k is k, while the partial sum S_n is n(n+1)/2. I know there are rules to reach the summation in each case (for example, for the series of k², the summation is n(n+1)(2n+1)/6, etc.), but is there a more general way to convert the summation S_n back to the general term, just like differentiation and integration in calculus?
Relevant answer
T(1) = S(1) and T(n) = S(n) - S(n-1) for n>1? Perhaps I just don't understand what you mean by T(n) and S(n). I'll keep quiet.
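For instance, with $S_n=\frac{n(n+1)(2n+1)}{6}$ (the sum of the first $n$ squares), differencing recovers the general term:
$$T_n=S_n-S_{n-1}=\frac{n(n+1)(2n+1)-(n-1)n(2n-1)}{6}=\frac{n\cdot 6n}{6}=n^2 .$$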
Equation: $(e^{iaX}f)(x)=f(ax)$, where $X$ may be an unbounded operator and $a \in \mathbb{R}_{>0}$. I have found something, but I am not convinced. The important points: the operator $X$ is not expressed in terms of $a$, and the Hilbert space is $L^2(\mathbb{R}_{>0}, dx/x)$. Thank you in advance.
Relevant answer
Dear Haridas Kumar Das, I enclose a paper where I have carried out the proof for the generator of the scaling operation; see e.g. Eqs. V.3-V.6 in the enclosed Ann. Phys. article from 1983. This particular generator concerns a theorem by Balslev and Combes (see ref. 37), and moreover it is an example of Stone's theorem: M. H. Stone, "The theory of representations for Boolean algebras", Transactions of the American Mathematical Society, vol. 40 (1936), pp. 37-111. Best regards, Erkki
- 67 1983 Prigogine Ann Phys..pdf
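For completeness, a summary of the standard computation behind this (my wording, under the usual conventions): on $H=L^2(\mathbb R_{>0},dx/x)$ the dilations $(U(t)f)(x)=f(e^{t}x)$ form a strongly continuous unitary group, because the measure $dx/x$ is dilation invariant. By Stone's theorem $U(t)=e^{itA}$ with self-adjoint generator
$$(Af)(x)=-\,i\,x\,f'(x),\qquad\text{since}\qquad \frac{d}{dt}\Big|_{t=0}f(e^{t}x)=x\,f'(x).$$
Writing $a=e^{t}$ recovers the multiplicative form $(e^{i(\ln a)A}f)(x)=f(ax)$ asked about in the question.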
Why do we say "Chebyshev wavelet family" or "Chebyshev wavelet basis", and why?
Relevant answer
Thank you for answering, Mr Hammad Khalil. But in the Legendre case the weight function is w(x) = 1, while in the Chebyshev case we have four kinds of Chebyshev wavelets and four weight functions, which are not equal to 1. My question is: when we construct the discrete wavelet, we translate and dilate the mother wavelet to form the Chebyshev wavelet, and we translate and dilate the weight function too; the result we get is many spaces L²(w_n). Where is the orthonormal basis here, and in which space?
By the Gelfand-Neumark theorem, the algebra of almost periodic functions in the sense of Bohr is isometrically isomorphic to the algebra of continuous complex-valued functions on the space of maximal ideals of the first algebra. This space is compact and is known as the Bohr compactification of the real line. I cannot find any explicit form of the elements of this space, only an abstract description (the Hewitt-Ross monograph, for example).
Relevant answer
You did not specify whether you consider the algebra of CONTINUOUS almost periodic functions, as is usually assumed. If so, I would suspect that a typical complex homomorphism that is not an evaluation functional reflects the behaviour of elements at infinity. Say, if one takes an arbitrary sequence t_n of reals tending to infinity, and an arbitrary ultrafilter U on N, then the functional F that maps every bounded almost periodic function f to the U-limit of f(t_n) is a complex homomorphism. If the algebra you consider also contains discontinuous functions, there will also be homomorphisms similar to "limit at a point": just take, in the previous example, t_n tending to some finite point.
I know that this may be an old question, but I hope more effective solutions will arise. Sincere thanks.
Relevant answer
See the cited article: there is an example of (discontinuous) f and g such that f has smallest period 1, g has smallest period \sqrt{2}, and f+g has smallest period \sqrt{3}. For continuous functions this cannot happen, and the cited article contains an elementary proof of this fact.
In a Euclidean space, an object S is convex provided the line segment connecting each pair of points in S is also within S. Examples of convex objects in the attached image include convex polyhedra and tilings containing convex polygons. Can other tilings containing convex shapes be found?

Solid cubes (not hollow cubes or cubes with dents in them) are also examples of convex objects. However, crescent shapes (a partially point-filled circular disk) are non-convex. To test the non-convexity of a crescent, select a pair of points along the inner edge of the crescent and draw a line segment between the selected points; except for the end points, the remaining points of the line segment will not be within the crescent. Except for the 3rd and 5th cubes, the cubes in the attached images are convex objects (all points bounded by the walls of each cube are contained in the cube). From left to right, the crescent shapes shown in the attached image are non-convex: the Nakhchivan, Azerbaijan dome, the Taj Mahal, and the flags of Algeria, Tunisia, Turkey and Turkmenistan. Can you identify other crescent shapes in art or in architecture that are non-convex? Going further, can you identify other non-convex objects in art or in architecture?

The notion of convexity leads to many practical applications, such as optimization, image processing and antimatroids, which are useful in discrete event simulation, AI planning, and feasible states of learners. In science, convex sets provide a basis for solving optimization and duality problems. Convex sets also appear in solving force closure in robotic grasping. Recent work has been done on decomposing 2D and 3D models into their approximate convex components; see, for example, the attached decompositions from page 6 in J.-M. Lien, Approximate convex decomposition and its applications, Ph.D. thesis, Texas A&M University, 2006. There are many other applications of the notion of convexity in science. Can you suggest any?
Relevant answer
It is always lovely when you can formulate a discrete optimization problem over a convex polyhedron. It gives you properties you can exploit, some of which you have pointed out. The application I'd like to point out is scheduling. Scheduling is pretty notorious for exploiting properties of convex polyhedra when the problem, as an optimization problem, can be formulated as the relaxed LP of its IP counterpart. Of course, this usually means solving a linear program and then rounding (or, in some settings, primal-dual approaches are used instead of rounding). The link above is to a very important 2-approximation algorithm that showed how you can tackle heterogeneous computer scheduling (unrelated parallel machines) with linear programming, and it still matches the best approximation factor to this date (some other algorithms have been proposed with the same approximation factor). New results still draw from this paper (see the papers citing it; there are a lot). The notion of convexity can be applied to optimization problems and give us a better understanding of how well, or just how, we can approximate intractable problems. Hope this helps :)!
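Returning to the definition in the question, here is a minimal computational illustration (my own sketch): a simple polygon is convex exactly when all turns along its boundary have the same orientation, which the cross products of consecutive edge vectors detect.

```python
def is_convex_polygon(pts):
    """True if the simple polygon with vertices pts (in order) is convex:
    all cross products of consecutive edge vectors have the same sign."""
    n = len(pts)
    signs = set()
    for i in range(n):
        ax, ay = pts[i]
        bx, by = pts[(i + 1) % n]
        cx, cy = pts[(i + 2) % n]
        cross = (bx - ax) * (cy - by) - (by - ay) * (cx - bx)
        if cross != 0:
            signs.add(cross > 0)
    return len(signs) <= 1

print(is_convex_polygon([(0, 0), (2, 0), (2, 2), (0, 2)]))          # square: True
print(is_convex_polygon([(0, 0), (2, 0), (1, 1), (2, 2), (0, 2)]))  # dented square: False
```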
Can you suggest any material, book or paper on the connection between crossed products of C*-algebras and semigroup C*-algebras?
Relevant answer
Visit the following: http://wwwmath.uni-muenster.de/42/fileadmin/Einrichtungen/reine/Elke/Li-nuclear-semigroupCstar.pdf
Relevant answer
Let me restate your question as follows: how can one approximate a nonlinear term by a linear one? A detailed answer must be based on the formulation of your problem. 1) For instance, if your problem concerns a nonlinear partial differential equation (NPDE) of the form (H + eV)f = r, where H is a linear operator, V is your nonlinear term, e is a small perturbation parameter and f is your unknown, then you can apply a perturbation method, as sketched below. 2) There are nonlinear problems which can be solved exactly; for instance, the class of 1+1 dimensional NPDEs which can be handled by the inverse scattering method or by a Bäcklund transformation, et cetera. Please tell us more about your problem.
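To make item 1) concrete, a standard first-order sketch in the notation above (assuming V is smooth enough that $V(f_0+ef_1)=V(f_0)+O(e)$): write $f=f_0+e f_1+O(e^{2})$ and collect powers of $e$ in $(H+eV)f=r$ to get
$$H f_0=r,\qquad H f_1=-V(f_0),$$
so each correction only requires solving a linear problem with the operator $H$.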
In recent times, the study of parity-time (PT) symmetry has been quite active in diverse areas of physics (e.g. quantum field theory, open quantum systems), and in optics it has been experimentally realized in numerous kinds of settings, paving the way for more widespread investigation. But recently, in a PRL paper describing a thought experiment (PRL 112, 130404 (2014)), the authors claimed that although PT symmetry may be a useful tool for studying systems, it seems far from being a fundamental theory, as it locally violates the no-signaling principle. That is why the above question arises. Your views and comments are most welcome!
Relevant answer
Dear all, this is about a confusion of mine that arose after looking at the PRL paper mentioned in the question. While doing local evolution they maintain the norm, which is consistent with PT symmetry, in which the evolution operator becomes unitary corresponding to the non-Hermitian PT-symmetric Hamiltonian. But when they calculate probabilities they use the usual norm. As they say, this is not their own assumption but an implicit assumption of PT-symmetric quantum theory. This jumping between different norms is not well justified, and I suspect it may lead to other unexpected things. For example, it can be shown that local unitaries can then change the entanglement content of a bipartite system, which is neither possible nor plausible in usual quantum mechanics; in fact, when defining a measure of quantum correlation for mixed states, one uses this assumption. So the assumption used in the paper about evolution and the calculation of probabilities should be lifted in order to probe the question of the importance of PT symmetry.
Relevant answer
Thank you for this beautiful question. I've got almost the same proof. I still think that the formula
a_k = 2((3/2)^k (n+1) - 1)   (*)
under the assumption Odd(a_k) = a_k/2 for all k>1 and all odd n>1, could simplify Gunter's proof: since a_k is an integer and even, (3/2)^k (n+1) is an integer and n+1 is divisible by 2^k for all k>1 and all odd n>1. This contradicts the fact that for any odd n>1 there exists k>0 such that (n+1)/2^k < 1, and therefore n+1 is not divisible by 2^k.
It means that there exists k>n such that Odd(a_k) <= a_k/4 and a_(k+1) <= (3/4)a_k + 1. Therefore a_(k+1) - a_k <= -a_k/4 + 1. Since a_k >= 4, we get a_(k+1) - a_k <= -a_k/4 + 1 <= 0. So {a_k} is not monotonically increasing. It is not monotonically decreasing either, since our sequence of positive integers satisfies a_k >= 4. Too late... :-)
Proof of formula (*): assume Odd(a_k) = a_k/2 for all k>1 and all odd n>1. Then
a_2 = 3(3n+1)/2 + 1 = (3^2/2)n + 3/2 + 1
...
a_k = 3(3/2)^(k-1) n + (3/2)^(k-1) + ... + (3/2) + 1
    = 3(3/2)^(k-1) n + ((3/2)^k - 1)/(3/2 - 1)
    = 3(3/2)^(k-1) n + 2((3/2)^k - 1)
    = 2((3/2)^k n + (3/2)^k - 1)
    = 2((3/2)^k (n+1) - 1)
    = 2((3^k)(n+1)/(2^k) - 1)
How can we introduce the idea of infinity to students? Its properties, relationship with zero etc.?
Relevant answer
Two mirrors - one looks to another. In them there is no reflection, but infinity. What does this mean? What is it? How to explain? - Mystery of the mind. Perhaps there lies the road to infinity? Can't see through the glass end of the road. Unfortunately, in every science concept of infinity is different. What kind of science is it?
We have four sets of data and want to cluster them into two clusters. We have only one attribute in this regard. How can we classify them?
Relevant answer
A possible option is non-parametric clustering, which does not assume a specific distribution model and may work better for non-Gaussian distributions. Some options are mean shift or subtractive clustering, which may also give you an idea of the number of clusters.
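For a single attribute, even a plain two-means split is often enough; here is a minimal numpy sketch (the data and names are placeholders of mine, and it assumes both clusters stay non-empty).

```python
import numpy as np

def two_means_1d(x, n_iter=100):
    """Cluster 1-D values into two groups by alternating assignment/update
    (Lloyd's algorithm specialized to k = 2 on one attribute)."""
    x = np.asarray(x, dtype=float)
    c = np.array([x.min(), x.max()])           # initial centers
    for _ in range(n_iter):
        labels = (np.abs(x - c[0]) > np.abs(x - c[1])).astype(int)
        new_c = np.array([x[labels == k].mean() for k in (0, 1)])
        if np.allclose(new_c, c):
            break
        c = new_c
    return labels, c

values = np.array([1.1, 0.9, 1.3, 5.2, 4.8, 5.5, 1.0, 5.1])
labels, centers = two_means_1d(values)
print(labels, centers)    # two clusters, around ~1.1 and ~5.15
```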
This equation results from modeling a nonlinear element.
Relevant answer
For a=0 and c = w^2/4 I have found a particular (not general!) solution y(t)=Q sin(wt/2), where Q=2(A/(wb))^(1/2).
It is very difficult to know the exact origins of mathematics. I would like to know about new number systems which are under research. New number systems can be used for cryptography.
Relevant answer
I don't understand what you mean by "dimension zero" of a number system. If you are looking for a "new number system", you may want to see the paper "Z₂-graded Number Theory" (by Regev, Henke and myself), where we investigate the arithmetic/algebraic properties of a number system that extends the integers (which I prefer to call the "superintegers"). I don't know if it can be of any use in cryptography. The main motivation for studying their arithmetic properties is that they are convenient as index sets for grading superalgebras and for indexing characters of the symmetric group, since their arithmetic operations are related to the "hook numbers" of Young diagrams associated with characters, so they do encode some not entirely trivial combinatorics in their arithmetic.
Relevant answer
Gro Hovhannisyan: You forget the factor 6.
It was long thought impossible to have a closed-form formula that can calculate an arbitrary Nth digit of pi, until Bailey, Borwein and Plouffe produced a base-16 formula in the mid-1990s. My question is: why is it only possible in base 16, and what is so special about 16? Have formulas for the Nth digit of other transcendental numbers (e.g. e) been produced yet? Are these always in base 16, or do they require other bases? What is the status of this research on transcendental numbers? How many such numbers now have formulas for their Nth digit, and in what bases?
Relevant answer
Yes, the Chinese version was better than averaging Archimedes' bounds 223/71 < pi < 22/7. Does anyone know how to solve for pi from R in A = pi R², in such a way that the Egyptian and Greek square-root methods can be applied? I bet some nice approximations came from that, like 22/7 and better. Thanks.
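For reference, the digit-extraction formula alluded to in the question is the Bailey-Borwein-Plouffe (BBP) formula
$$\pi=\sum_{k=0}^{\infty}\frac{1}{16^{k}}\left(\frac{4}{8k+1}-\frac{2}{8k+4}-\frac{1}{8k+5}-\frac{1}{8k+6}\right),$$
whose base-16 structure is what lets one compute an individual hexadecimal (hence binary) digit of $\pi$ without computing the preceding digits; BBP-type formulas have since been found in other bases and for other constants as well.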
Source: https://www.researchgate.net/topic/Pure-Mathematics
Posted by: smithaginsons.blogspot.com