
Talk:Busy beaver


Better Introduction


Edit (29-12-15): The change has been applied. --Negrulio (talk) 01:33, 29 December 2015 (UTC)[reply]

Edit (28-12-15): I am going for this change tomorrow. --Negrulio (talk) 16:00, 28 December 2015 (UTC)[reply]

The current article introduction has several issues. I will first quote it and then enumerate the problems it has:

In computability theory, a busy beaver is a Turing machine that attains the maximum number of steps performed, or maximum number of nonblank symbols finally on the tape, among all Turing machines in a certain class. The Turing machines in this class must meet certain design specifications and are required to eventually halt after being started with a blank tape.

Introduction issues

  • It begins by defining busy beaver as a Turing Machine. I think this complicates things. The original article published by Tibor Radó in 1962 describes "Busy Beaver" as a game which involves creating binary-alphabet, halting Turing Machines that print the most 1s on a tape that starts with 0s only. Wouldn't it be better to simply describe the game, and then describe the Turing Machines it requires?
  • It is too abstract. Mentioning "a certain class" and "certain design specifications" makes it way too complicated. Summarizing the original game mentioned by Radó might make it more understandable.
  • It defines two different concepts, the bb turing machine and the bb function. I would leave the function definition to a new section on the article.

Proposed new introduction


I propose the following introduction:

The Busy Beaver Game consists of designing a halting, binary-alphabet Turing Machine which writes the most 1s on the tape, using only a limited set of states. The rules for the 2-state game are as follows: (i) the machine must have two states in addition to the halting state, and (ii) the tape starts with 0s only. As the player, you should conceive each state aiming for the maximum output of 1s on the tape while making sure the machine will halt eventually.
The Nth busy beaver or BB-n is the Turing Machine that wins the N-state Busy Beaver Game. That is, it attains the maximum number of 1s among all other possible N-state competing Turing Machines. The BB-2 Turing machine, for instance, achieves four 1s in six steps.
The busy beaver game has implications in computability theory, the halting problem, and complexity theory. The concept was first introduced by Tibor Radó in his 1962 paper, "On Non-Computable Functions". Negrulio (talk) 15:51, 27 December 2015 (UTC)[reply]
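To make the "four 1s in six steps" claim concrete, here is a minimal simulation sketch (an illustration added for readers, not part of the proposal). It uses the 2-state transition table described further down this page, with the moves treated as head movements, which does not change the counts:

TABLE = {
  ("A", 0): (1, "R", "B"),
  ("A", 1): (1, "L", "B"),
  ("B", 0): (1, "L", "A"),
  ("B", 1): (1, "N", "H"),   # write 1, no move, halt
}

def run_busy_beaver(table, start="A", halt="H"):
  tape, head, state, steps = {}, 0, start, 0
  while state != halt:
    symbol = tape.get(head, 0)                 # blank squares read as 0
    write, move, state = table[(state, symbol)]
    tape[head] = write
    head += {"L": -1, "R": 1, "N": 0}[move]
    steps += 1
  return sum(tape.values()), steps             # (number of 1s on the tape, steps taken)

print(run_busy_beaver(TABLE))                  # prints (4, 6)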

Busy beaver functions


At the moment, only Σ (number of nonzeros in result) and S (number of steps taken) are mentioned. There are others that could be investigated:

  • number of symbol toggles
  • number of toggles from 0 to 1 (or 1 to 0, 0 to non-0, to higher-index symbol, etc.)
  • number of tape positions that are ever toggled
  • final distance from starting point
  • greatest distance reached from starting point
  • greatest distance from starting point at which a non-0 is ever set
  • greatest distance from starting point at which a non-0 is present at the end
  • overall explored range
  • overall range of non-0s ever set
  • overall range of non-0s at the end

.... -- Smjg 15:11, 27 September 2005 (UTC)[reply]

It says that there are [4(n+1)]^(2n) machines in this section... aren't there only [4n+2]^(2n)? Or are 1-left-halt and 1-right-halt different???
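For what it's worth, here is a sketch of where the two counts could come from (an illustration of the two conventions behind the question; which convention the article intends is exactly what is being asked):

def count_free_halt(n):
  # Each of the 2n table entries picks: symbol to write (2) x move (2) x next state
  # including Halt (n+1), giving [4(n+1)]^(2n) tables.
  return (4 * (n + 1)) ** (2 * n)

def count_write1_halt(n):
  # Same, except an entry that halts always writes 1, leaving only its move free:
  # 2 x 2 x n ordinary choices plus 2 halting choices per entry, giving [4n+2]^(2n).
  return (4 * n + 2) ** (2 * n)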

References


I happened upon the Sci Am references in an old notebook of mine; they are "unverified" in that I haven't laid eyes on them in 20-odd years. The Rado reference actually appears in Booth's references for his chapter 9 titled "Turing Machines". So it is "quasi-unverified" as well. I'm staring at the pages of the Booth book, so I guess we could argue that reference is verified. wvbailey 22:08, 6 January 2006 (UTC)


The pdf is currently available at this URL:

https://www.cs.auckland.ac.nz/~chaitin/bellcom.pdf

I don't know how to properly edit references, sorry.

68.7.137.69 (talk) 04:32, 26 May 2016 (UTC)[reply]


The link to "The Busy Beaver Problem : A NEW MILLENNIUM ATTACK" is also broken — Preceding unsigned comment added by 199.203.204.197 (talk) 06:50, 28 June 2022 (UTC)[reply]

Maybe it can be replaced with https://homepages.hass.rpi.edu/heuveb/Research/BB/index.html ? Seems like the same reference at a different address — Preceding unsigned comment added by 199.203.204.197 (talk) 06:53, 28 June 2022 (UTC)[reply]

Or should it be https://homepages.hass.rpi.edu/heuveb/Research/BB/status.html ? 199.203.204.197 (talk) 06:55, 28 June 2022 (UTC)[reply]

Notation


'1's' versus '1s' versus 'ones'


The article makes inconsistent use of "1's", "1s", and "ones" to refer to the plural of "1". How should this be standardized? Also, what is meant by the notation "SRTM(n)"? Pmdboi 14:24, 8 March 2006 (UTC)[reply]

Unless Wikipedia has a standard contrary to my opinion I vote for "1's" when the 1's are the plural of symbol "1" from the binary collection of symbols { 0, 1 }. Standard American prose/literature wants/uses/expects "one" and "ones". Interesting that the international symbol for OFF is O and ON is | (vertical-slash, not ordinal "1"), per the IEC -- can't remember the exact specification number (IEC-317, Symbols?). Emil Post specified "mark" = { | } and "blank" = { } in his models. It probably should be "|", both plural and singular, but I'd stick with "1's" per the argument: "If Bill likes it, it must be good." (Hope this helps). wvbaileyWvbailey 00:45, 9 March 2006 (UTC)[reply]
As of December 2015 I can't find any "1's" or "0's". Everything has been replaced with "1s" and "0s". There is no English standard on how to write plurals of single digits. Oxford Dictionaries Online suggest adding no apostrophe to pluralize a number except when it is a single digit, in which case "you can use an apostrophe to show the plurals of single numbers." Notice the "can". Thus, this article is correctly written. Negrulio (talk) 14:17, 27 December 2015 (UTC)[reply]

Use of up-arrow notation versus leading-superscript notation


In the statement about known values, there is a very strange usage of the reverse-superscript notation (which, if I understand correctly, connotes tetration). The reason I find it strange is that it's being mixed in with up-arrow notation which connotes the same thing. I find this to be a bad practice, as it should just be one or the other. From the up-arrow notation article:

But in the text we currently have a seemingly-unnecessary mixing of the two notations and a failure to identify the alternate form:

   
   where ↑ is Knuth up-arrow notation and A is Ackermann's function.

If we stick to just one notation, it should be equivalent to either:

   

OR

   

Finally, I find the use of a single \quad to be confusing (because it just looks like an extra space or so being added and still reads as a multiplication, as opposed to a clearer delimiter), so I'd instead prefer something like:

   

I'll note that I'm not totally bent on this last modification if it's something that's really standard that I simply haven't seen or noticed before. Thanks. — Koyae (talk) 06:28, 25 December 2014 (UTC)[reply]
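For reference, the standard reading of the superscripted arrow (a general fact about Knuth's notation, added here for orientation; not a quotation from the article under discussion):

   a ↑^n b  =  a ↑↑…↑ b  (n arrows);   e.g.  3 ↑^2 3  =  3 ↑↑ 3  =  3^(3^3)  =  7,625,597,484,987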

In case anyone was thinking of changing the article in response to this, please don't. There's no mixing of notations here; 3 ↑^(k−2) 3 means 3 ↑↑…↑ 3, where k−2 is the number of arrows. 78.146.215.183 (talk) 15:52, 28 April 2015 (UTC)[reply]

Error?


I have noticed a slight problem with this proof. It assumes the number of states required for EvalS is constant no matter the value of n0. What if the number of states required for EvalS varies according to n0? Perhaps the Turing machine encoding of EvalS can be computed from n0?

No. If there was a Turing machine M which could create EvalS based upon n0, then we could run M on n0 to create EvalS and then run EvalS, thus having a machine with fixed size which runs EvalS. This is some justification for our definition of computability as Turing machine computability. - Sligocki 03:14, 9 December 2006 (UTC)[reply]

I noticed that the article states that the 2 symbol, 3 state value is 14 with six 1's printed, yet elsewhere I've seen a mention of "In 1965 Rado, together with Shen Lin, proved that BB(3) is 21." I don't have the knowledge of where to go look up that state machine or to find definitive proof either way, it was just something I caught looking at the wiki page and this other article. --72.94.0.116 (talk) 06:27, 22 March 2008 (UTC) (Draco18s)[reply]

Accessibility


Hi,

this article is pretty nice but it's very sophisticated! I consider myself okay with maths and a minimum of computing but most of this goes over my head. Wikipedia should be accessible to laypeople, and even though it's hard when you're an expert and the audience is not, I think that the editors should dumb this down a little! If a middle school student who has no maths or computing skills cannot completely understand the article then it has no place in an encyclopedia! If something cannot be left out then include links as a minimum. -- ben 18:03, 11 July 2006 (UTC)[reply]

Is there a category where we can put requests for accessibility? I'm looking at this, and it seems that unless you have a background in whatever this article is talking about, you won't be able to understand it. For example, that sequence "1, 4, 6, 13, ≥ 4098" doesn't make any sense at all. It should definitely be made more clear to the average person what the reasons are for not knowing what the fifth number is when it is known that it can't be bigger than 4098. Also, what's the point of all this? The article doesn't seem to make it clear why any of this is of any importance to anything, such as why anyone would ever want to know what the fifth value that is less than 4098 is, and how knowing that number would make any difference to anything. Definitely not user friendly Carterhawk 08:23, 26 August 2006 (UTC)[reply]

You are correct. The numbers 'explode' after 4 instructions (states). And "busy beavers" have no earthly use ... yet. But who can say? Someday something incredible may come from studying them. But "busy beavers" is a "hard" topic. The "busy beaver" challenges the very best computer scientists, folks who've had years experience working with computers. wvbailey

The quoted text "In The Busy Beaver game, Consider a Turing machine with the binary alphabet {0, 1} ... Now start with a blank tape (i.e. every cell has a 0 in it) and a TABLE of n instructions." is confusing in the context of the classic Turing machine that has an input alphabet Σ and a tape alphabet Γ, with the latter containing the former plus (at least) a blank symbol. It seems that here we have a TM with a tape alphabet of {0, 1}, where 0 is the blank symbol. Of course we don't actually care about the input alphabet, as the input is effectively ε, but all of this should be made more clear. Pascal Michel's pages, referenced in the article, while not labouring the point, do explicitly state that 0 is the blank symbol. Onkelringelhuth 15:22, 27 January 2007 (UTC)[reply]
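A hedged sketch of the convention being described (an illustration added for readers; not wording from the article or from Michel's pages): the tape alphabet Γ is {0, 1}, the blank symbol is identified with 0, and the input alphabet plays no role because the machine always starts on an all-blank tape.

BUSY_BEAVER_CONVENTION = {
  "tape_alphabet": {0, 1},   # Γ
  "blank": 0,                # the blank symbol is identified with 0
  "input_alphabet": set(),   # unused: the input is effectively the empty word ε
}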

The game of Busy Beavers: a simple explanation

This is a two-symbol three-state Busy Beaver. It is working on a tape initially printed with 0/blanks. The robot has looked at the symbol in the window (symbol 0), has read the instruction ("state") C and is about to PRINT a 1. Then it will push the tape-LEFT button. Lastly it will look toward instruction ("state") B. (The print/erase mechanism is out of sight, beneath the window.)

"Busy beavers" is a game. The goal? To find "the instructions" that cause you as the busy beaver to print the most ones on a piece of paper tape. But like all games there are rules, and to play, first you will need to learn them.

Rule #1: Know what a "two-symbol Turing machine" is. The Turing machine is NOT easy to understand. But think of it as a really weird, clunky, simple-minded calculating-machine, the most difficult-to-use pocket calculator on the planet, but the most powerful. It has a hugely-long piece of paper tape (marked off into "squares") running through it, 4 buttons (PRINT, ERASE, LEFT ONE SQUARE, RIGHT ONE SQUARE), a robot that pushes the buttons, and a list of instructions called "the Table" that the robot must follow.

As a busy beaver you will be the robot. You have the tape running past you. You will have a list of instructions. You (as a two-state busy-beaver) can PRINT only 1's (tally marks) or ERASE them (in other words, make blanks, print 0's) on this all-blank (all-0's) tape. You can print or overprint, erase or overerase, but only one mark on one square at a time. You can move the tape only one square LEFT or RIGHT at a time. The tape is as long as you want it to be. If you would rather, you can use the push-button console rather than doing this by hand.

RULE #2: The busy beaver always follows its unchanging list of instructions. You will need to know how to read them and you must follow them without fail. See below, with an example.

RULE #3: Always obey rules #1 and #2. You are now a computer/robot, after all.

If you can do RULE #1, #2 and #3 without mistakes you too can leave the world of humans and join the ranks of the busy beavers!

Your mission:

Your mission (and your life) as a busy beaver will come in three parts:

Mission Part I:

> Your mission as a "busy beaver" (should you choose to accept it):
>> At the start you will be given a list of instructions. (Thereafter either you or your handlers will change the instructions in Part II). Follow the instructions precisely, and print as many 1's as your instructions tell you to do before halting. To succeed you must print some ones and HALT, eventually!
> Your "Turing tape" is ready -- it is blank. Your instruction is #1. There is no time limit, no one cares (too much) how long you take. Just follow the rules and print the 1's on your blank tape.
> Robots, are you ready?
> Go!

Mission Part II:

Robots, your busy beaver trial is over. You have succeeded: you either came to HALT or you failed to HALT (how did we know this? -- ahh, that's an interesting question!), OR, you printed some ones. The score-keepers are standing by, ready to count your ones and record the instructions that you followed.

> Your new mission (should you choose to accept it):
Change the instructions you were given, so they are still a true "busy beaver Turing program" and they have the same number of instructions. For example: if your mission is to find the best 6 instructions, you must have 6 instructions. But if in instruction #3 you see ERASE, you might change this to PRINT, and where you might see LEFT you might change this to RIGHT.
Do Part I again.

Mission Part III:

Repeat Part I and Part II forever! This is the life of a "busy beaver". Aren't you glad you're a robot and not a human!

>>>>>> <<<<<<<<

How to read busy beaver instructions:

Here is the instruction table for a 2-state Turing-machine busy beaver.

                    Current instruction A:                  Current instruction B:
                    PRINT/ERASE:  Move tape:  Next state:   PRINT/ERASE:  Move tape:  Next state:
 tape symbol is 0:  PRINT         RIGHT       B             PRINT         LEFT        A
 tape symbol is 1:  PRINT         LEFT        B             PRINT         NONE        H

Busy beavers always start at "instruction A" with a blank tape (instructions are usually called "states" in Turing-world). The "HEAD" is where the scanned square is -- where your eye is looking (so "eye" might be a better word).

   HEAD          At start of Instruction:
                 A

In the instruction table, look down the column on the left. Is the scanned square (in the button console, or before you on the tape, beneath HEAD) blank (or 0), or does it have a 1 written there? It should be blank/0, because we're starting from scratch. Since it is blank/0, under A follow the top row from left to right and do the following in this order:

PRINT (mark the square with a 1)
RIGHT (move the tape to the right)
GO TO INSTRUCTION B
   HEAD          At start of Instruction:
                 A
   1             B


At instruction B we look to see if the scanned symbol is a 1 or a blank/0. Since we know it is a blank/0, we follow the top row again, but under B, and do the following:

PRINT (mark the square)
LEFT (tape to the left)
GO TO INSTRUCTION A
   HEAD          At start of Instruction:
                 A
   1             B
   1 1           A

Now we find that there's a 1 printed on the scanned square. So we follow the bottom row under A and find the instructions are:

PRINT
LEFT
GO TO INSTRUCTION B
   HEAD          At start of Instruction:
                 A
   1             B
   1 1           A
   1 1           B

We continue this, when finally we hit the HALT instruction. We're done!:

   HEAD          At start of Instruction:
                 A
   1             B
   1 1           A
   1 1           B
   1 1 1         A
   1 1 1 1       B
   1 1 1 1       H

You as a busy beaver have printed 4 ones. They didn't have to be all in a row, but indeed this is nice work. You have done as well as any "2-state 2-symbol busy beaver" can do. Now it's time to go on to a three-state instruction table. Robots: are you ready?

The end.

>>>>>> <<<<<<<<

An example of a "little" busy beaver -- a one, two, three, or four-state busy beaver -- can be "run" on any spreadsheet such as Excel. Five and six-state busy beavers cannot -- their "productions" -- the numbers of ones they can print -- are too huge to fit. You will need some help setting a model up. It will help you to know what the INDEX(....) instruction does (for example, INDEX(B5:B20,,A3)), but you can make a Turing machine on a spreadsheet without this. Still it's kind of tricky. For an example of what a busy beaver's "run" looks like, see Turing machine examples and Post-Turing machine. wvbaileyWvbailey 17:39, 8 September 2006 (UTC)[reply]

4-state busy beaver example

Run on Excel. Transposed sideways to fit on page. A. Brady's machine.

The following is a test-case to see what it looks like. It will look better with Netscape viewer. You can find the 2-state 2-symbol busy beaver at Post-Turing machine and as mentioned in the article, the 3-state 2-symbol busy beaver at Turing machine examples. wvbaileyWvbailey 20:37, 22 August 2006 (UTC)[reply]

wvbailey


wvbailey's example should go on the main page. The main page currently makes no sense to anyone who doesn't already understand the subject. His explanation and example were entirely understandable and at least allowed me to understand what was being described in the main article. Thanks for that, wvbailey!

Too technical


A few readers (see above) have requested that the article contain some context, explanation and example. I don't feel competent to address the more technical parts here (I could work on the historical -- I want to see Rado's paper, for example). I would suggest the following:

  • Make it clear that busy beavers is "a game" (albeit a weird mathematical game) that anyone can play. It has no apparent "use" -- see next bullet-point.
  • Historical context: Why was busy beavers proposed by Rado? Brady hints that it has to do with questions around "the halting problem", instances of tiny machines should be amenable to solution of "the problem" but even these tiny ones are virtually "intractable".
  • Who works on busy beavers? The only name I know is Allen H. Brady; cf his paper referenced.
  • A brief description of the algorithm(s) used to find the busiest beaver.
  • How do we know that a particular busy beaver under test (e.g. a 6-state one) is not locked in a loop? How do we know when to "give up" and "call it a day"?
  • Heuristics: are they used? Can a casual reader figure out which b-b's won't work, and why? (Certainly some are trivial, such as any accessible state with both 0 and 1 reverting back to itself, making a circle.)
  • Genetic algorithms used in any way? Parallel algorithms used to sift through the possibles?
  • Examples of the simplest cases. For example Post-Turing machine and Turing machine examples where I've used 2- and 3-state two-symbol busy beavers as examples (thus avoiding "original research"). But really those should be repeated here, or especially the busy beaver on Turing machine examples should actually be here, and the Turing machine article refer here.
  • Why does the problem "explode"? It seems to explode in two ways: (i) the number of possible b-b's per number of states, and (ii) the number of states a particular instance under test might go through.
  • Provide a simple explanation of the number of possible busy beaver machines per number of states N (it looks to me like it is (8*N)^N). But many are silly, hence the need for heuristics.
  • Better (more) print references, in-line references too. Any books out there specifically on busy beavers?

Suggestions? Comments? wvbaileyWvbailey 14:47, 15 September 2006 (UTC)[reply]

Content Update


I've added quite a bit of new information based upon Rado's original paper and some of your comments on this talk page. I hope that this has made the page more readable and accessible. Because of the major content change, I have removed the Confusing tag on this page. I hope that you all will review the page to see if it ought to be put back or not. If you do have any comments or requests, I would be very interested in hearing them.

Happy Busy Beavering! Sligocki 17:30, 12 December 2006 (UTC)[reply]

I very much appreciate what you've done here. I scoured the Dartmouth library for a copy of Rado's original paper but was unable to find one. I do have a second paper -- the Lin-Rado paper -- but not the very original one. I see someone has added yet another tag at the bottom. But without the original Rado paper I feel unprepared to do an effective review/edit. Where does one find a copy of the original paper? Lemme know, thanks. wvbaileyWvbailey 21:39, 27 January 2007 (UTC)[reply]

Non-computability of Σ : emphasizing a possible pitfall


The article states that though the busy beaver function itself is non-computable, any finite sequence Σ(0),...,Σ(n) is computable. Of course, this is true -- indeed, it is true for every finite set of natural numbers: the trivial algorithm which, on the input of n, prints the value Σ(n), solves the problem. To put it another way, it only means that Σ restricted to [1, N] can be described finitely -- which is obvious -- and nothing more: while it is not, theoretically, impossible to compute Σ(n) for some given n, we might never be able to, even with unimaginably powerful machines. The article arguably might suggest some kind of stronger property to a non-initiated reader. I'm trying to rephrase it carefully. Dabsent (talk) 11:12, 31 May 2008 (UTC)[reply]

Thanks for pointing out the room for improved wording, but that's not what I wrote; what I wrote was

Although Σ is a non-computable function, nevertheless for each natural number n, the finite sequence Σ(0), Σ(1), Σ(2), ..., Σ(n) is computable.

That statement is correct and unambiguous, imo, but to put more emphasis on the pitfall that arises for anyone prone to "quantifier dyslexia", I've reworded it. This is, after all, exactly the issue I was originally wanting to emphasize. I also replaced your explanation with a link to explanatory examples at computable function#Examples (see the first & last examples given there).--r.e.s. (talk) 21:32, 31 May 2008 (UTC)[reply]
I agreed it was perfectly true, and shouldn't have said 'ambiguous', because it technically isn't. My wording here was less precise than the original, but my point -- I'm afraid I didn't explain myself properly -- wasn't that the presence of the quantifier wasn't clear enough: the difficulty, I believe, lies in the idea that, for every finite sequence, there exists an algorithm to compute it, or more precisely, that with our definition of "algorithmically solvable", every particular instance of a problem can trivially be "algorithmically solvable" without the problem itself being "algorithmically solvable". The formal definition of e.g. a Turing machine makes this obvious, but from the point of view of intuition, this is a subtlety and not a triviality, as long as one isn't fully accustomed to it: an algorithm, in the usual sense, is a procedure for solving a class of problems.
I'm ok with the current wording and link; it's better that way.
Dabsent (talk) 18:29, 1 June 2008 (UTC)[reply]
Actually, the possibility of developing a concept of "effective calculability/noncalculability" that would apply to the calculation of a single individual integer — and therefore quite different from the now-standard notion of (non-)computability — was a major motivation for Rádo and Lin. Here's a quote from their 1965 paper:

Our interest in these very special problems was motivated by the fact that at present there is no formal concept available for the “effective calculability” of individual well-defined integers like Σ(4),Σ(5), .... (We are indebted to Professor Kleene of the University of Wisconsin for this information.) We felt therefore that the actual evaluation of Σ(3), SH(3) may yield some clues regarding the formulation of a fruitful concept for the effective calculability (and noncalculability) of individual well-defined integers.

(The quotation is in L. De Mol's "Tracing Unsolvability", p.462.) Although the present section of the article is about non-computability, perhaps this point about the search for a different type of "non-computability" should be emphasized more?
--r.e.s. (talk) 01:17, 2 June 2008 (UTC)[reply]
Hmm, if I understand this correctly, does this imply that e.g. there is a deterministic, although long-running, algorithm to determine the truth of any particular universal statement over a countable set with a computable predicate, such as Goldbach's conjecture? If so, this would appear to be in opposition to some of the argument at halting problem for why undecidability of the halting problem is "unsurprising" or "intuitive." Dcoetzee 08:19, 2 June 2008 (UTC)[reply]
Since "countable" doesn't mean "finite", I would say the answer to your question is "no". But when restricted to finitely many cases, an otherwise-unsolvable decision problem necessarily becomes solvable, because solvability, like computability, technically refers to the mere existence of the required algorithm — not to anyone actually "producing" the algorithm. Rado was evidently exploring the possibility that such an algorithm, though technically existing to compute a single finite instance (e.g. the single integer Σ(n0) for some individual integer n0), might nevertheless be logically unproduceable (my term), as distinct from merely being overwhelmingly impractical to produce and/or execute. As far as I know, this is still an open question.
--r.e.s. (talk) 13:35, 2 June 2008 (UTC)[reply]
Thanks R.e.s. for the link and information ; this is very interesting. I'll think about it.
If you wish to write more about this, by any means do : it certainly would be a valuable contribution to the article. However, I do not regard myself qualified for now and have no idea about the current state of research on this topic.
Regarding Dcoetzee's question: the idea of the procedure based on the busy beaver is to produce a Turing machine (or some kind of program in a broader sense) able to test the validity of the conjecture for any given n. Let's say this program is of size s. You compute S(s), then run S(s) operations of your program. Now, if you consider a set of problems of the kind you described, all of which can be described by programs of size less than s, you have indeed an algorithm able to solve them all, which embeds the value S(s) or equivalently an algorithm computing it: the same procedure applies to all of them. On the other hand, if you consider all the problems you mention, the size of the programs needed to test the conjectures on some values of n will not be bounded. A similar procedure would thus need the values of S for infinitely many s, that is, would need to embed an algorithm computing S, which doesn't exist. So there needn't exist such a general procedure. I hope I understood the question correctly?
Dabsent (talk) 15:56, 2 June 2008 (UTC)[reply]
Yes, but I was unclear - I meant to say that for any fixed Turing machine T, there is an algorithm (depending on T) to decide its halting problem; but now that I look at it again, this is actually rather obvious, since the constant algorithm returning either true or false would suffice (depending on T). The fact that once T is fixed you can solve its halting problem by first solving the busy beaver function for its state size is a rather roundabout way of doing it. Dcoetzee 16:53, 2 June 2008 (UTC)[reply]
I was apparently too hasty in replying "no" to Dcoetzee, according to Chaitin's article "Computing the Busy Beaver Function". There it's asserted that on information-theoretical grounds, for a conjecture with predicate P of the type Dcoetzee asked about, there exists a natural number m such that if P is verified for the finitely-many natural numbers n < m, then P must hold for all n:
"...it would suffice to have a bound on how far it is necessary to test P before settling the conjecture in the affirmative if no counterexample has been found, and of course rejecting it if one was discovered. Σ provides this bound, for if P has program-size complexity or algorithmic information content k, then it suffices to examine the first Σ(k +O(1)) natural numbers to decide whether or not P is always true." (p. 3)
However surprising this may be, it seems to answer Dcoetzee's question in the affirmative.
--r.e.s. (talk) 19:11, 2 June 2008 (UTC)[reply]
It seems to me that this is precisely what stands in the present article in the particular case of Goldbach's conjecture (section Applications); it is easily generalized. But as I said, the algorithm depends on the problem: it cannot become a general algorithm precisely because the busy beaver function is non-computable. So there is no algorithm capable of determining the truth "of any particular...", but "For every particular ..." there exists an algorithm. Quantifiers expressed in natural language are an infinite source of misunderstanding -- I apologize.
Dabsent (talk) 21:01, 2 June 2008 (UTC)[reply]
The Applications section does already nicely address such a result directly in terms of Rado's S function, rather than Σ, and without reference to algorithmic information theory. Actually, a (weaker) result in terms of Σ follows easily from the result in terms of S by using any of various inequalities established by Julstrom, et al., (e.g., S(n) ≤ Σ(3n+6)). I doubt that it's worth introducing any of these twists into that section, though, as it reads very well as is.
--r.e.s. (talk) 08:03, 3 June 2008 (UTC)[reply]

Infinite Busy Beavers?


I was wondering if it would be possible to extend the notion of Turing Machines to transfinite numbers of states or symbols. If, for example, we labeled the states by the natural numbers (instead of alphabetically), and used two algorithms to determine the instructions at each state for symbol 0 and for symbol 1, would it be possible to create a machine that halts after an infinite number of steps and/or prints an infinite number of 1's? Alternatively, could we use an infinite number of symbols and a finite number of states? Or a infinite number of symbols and of states?

I haven't thought too long about it, but it seems like it should be possible, perhaps even easy, to create such machines that halt either in a finite or infinite number of steps, or which do not halt. Ones that do not halt are trivial, and ones that halt in a finite number of steps obviously do not need an infinite number of states or symbols, but ones that halt after some infinite time span might be interesting.

Finally, if such a thing is possible, would it make a difference which infinite number of steps it takes ( vs. vs. vs. , etc.)? Eebster the Great (talk) 03:49, 1 October 2008 (UTC)[reply]

This is a very cool idea, but I don't think you'll be able to construct such a thing. Here's the crux of it: what does it mean for a TM to halt after an infinite number of steps? We can easily define the state it will be in at any finite time, but how do we define where it will be at time ω? Perhaps if the TM is in a very simple loop whereby it remains in the same state for all time ≥ n, then we could say that it will be in that state at time ω. But then it won't halt anyway. You'll run into other issues as well; for example, you could run out of tape (it is only ω long!). Sorry, Sligocki (talk) 12:53, 23 November 2008 (UTC)[reply]
The idea is that the TM has an infinite number of states, too. For example, if the TM has states, and each state has instructions following some sort of algorithm that can be generalized to an infinite number of steps and states, then it appears it could run for an infinite number of steps, going through an infinite number of states (perhaps some more than once), and still halt. The length of tape would not be a problem, because that can always be expanded. For example, instead of a tape divided into squares, you could have a plane divided into squares, and you would be fine. I'm not saying this generalization is possible (or useful), but it does seem like it ought to be. Eebster the Great (talk) 21:41, 25 November 2008 (UTC)[reply]
You can't exactly do this since a Turing machine with infinitely many states can potentially recognize any finite string or go arbitrarily far before stopping. As I understand it you can ask about equivalents of the Busy Beaver when you have access to some reasonably simple Oracle machine. JoshuaZ (talk) 22:00, 25 November 2008 (UTC)[reply]

Applications, Goldbach's conjecture


This doesn't seem right to me. It seems to be saying, of the Goldbach conjecture, that "There exists an N such that, if no counterexamples n < N exist, then no counterexamples n > N exist either." And we find N by coming up with a Turing machine to sequentially test for counterexamples and then plug its number of states and number of symbols into the busy beaver function.

So firstly that's a massive claim. And I figure it needs some citation. Secondly, I don't believe it, but it's hard to say why. Of course if you imagine a computer program searching for counterexamples, memory etc. means there's always a limit to how high you can look. (In 2GB of RAM I guess my computer couldn't look past 2^16,000,000,000 for counterexamples, at least without going to the hard drive.) But I'm also having trouble imagining a turing machine that could solve the problem. —Preceding unsigned comment added by 58.96.121.81 (talk) 05:31, 16 December 2008 (UTC)[reply]

This is quite a profound claim. But it is well supported. I have added a Chaitin paper that specifically states what has been written here. Unfortunately, the paper goes into less detail than this Wikipedia article (it is intended for Algorithmic Information Theorists), so it may not help you wrap your head around the concept.
To me, the busy beaver problem is so exciting because of how many non-intuitive (or even unbelievable) properties that it has. Many people have considered this bizarre property and one way to think about it is that finding S(n) requires solving all mathematical problems that can be encoded into an n-state Turing machine. Therefore we could solve the Goldbach conjecture if we knew a sufficiently large S(n), but proving that S(n) would require solving the Goldbach conjecture. I'm afraid that this is slipping a bit over into philosophy, but I hope that it helps to understand this property of busy beavers. Please feel free to contact me if you'd like to talk more. Sligocki (talk) 21:24, 16 December 2008 (UTC)[reply]
Also, I don't know the proper way to reference sources, so someone who does should fix my attempt. Thanks! Sligocki (talk) 21:26, 16 December 2008 (UTC)[reply]
Sligocki, unfortunately this really isn't a profound claim as the busy beaver function is *uncomputable*. Being uncomputable is really kinda weird and everything. You cannot find answers to the busy beaver problem in general. If the Goldbach conjecture could be represented by an x-state Turing machine then solving sigma(x) would require you to prove or disprove the Goldbach Conjecture, among many other problems. Solving the Goldbach Conjecture is a simpler problem than computing sigma(x). So while what you are saying is technically true, it is kinda like saying fire is one of the applications of the Apollo program. So that whole section should probably be removed. 98.218.10.165 (talk) 01:12, 8 May 2009 (UTC)[reply]
This is precisely the point, really. Just as the ability to solve the halting problem would allow you to automatically resolve a number of open conjectures, so would the ability to compute the busy beaver function. This is intended to give some intuition for why it ought to be so "hard." Dcoetzee 01:39, 9 May 2009 (UTC)[reply]
I'm not so sure either about the usability of uncomputable functions in solving anything except themselves. But what I am absolutely sure of, like everyone with at least some academic mathematical knowledge should be, is that there is NO WAY that a finite Turing machine could be programmed to check EVERY even natural number about anything they don't already have an algorithm for. I'm a temporal finitist and do not accept the concept of actual infinity, so I consider the natural numbers not as an infinite set but rather an arbitrarily large class of hereditary finite sets and even I say that Sligocki is just plain wrong here. Sorry. If you accept the standard notion of INFINITELY many even natural numbers you can not by any FINITE method check them ALL one by one. My view of the infinite as only a potential concept is really just a semantical one, I accept arbitrarily many numbers larger than any limit and infinitesimals as close to zero as can be defined.
But infinity does not evolve from finite objects without a specific axiom. Theorems considering ALL even natural numbers can only be proved by some sort of induction or by showing they are valid for an arbitrarily chosen one. Can't check them all. —Preceding unsigned comment added by 91.133.35.83 (talk) 18:42, 28 November 2009 (UTC)[reply]
There is an algorithm for checking Goldbach's conjecture for any even number. To check it for N we simply compute all prime numbers p less than N and see if N-p is a prime. There are many computable algorithms to do this (e.g. the Sieve of Eratosthenes). If this works we continue on and check the next even number, otherwise we fail. The question is do we ever fail? If so, Goldbach's conjecture has been disproven by a counterexample. If this algorithm never halts, then Goldbach's conjecture is true. Thus knowing the busy beaver number would make the Goldbach conjecture computable. Cheers, — sligocki (talk) 08:47, 29 November 2009 (UTC)[reply]
You are right, we cannot check all even numbers in finite time with this algorithm. But, if Goldbach is true, it will check each even number eventually. Thus it is guaranteed to find any counter-example and it is guaranteed to never halt if there are no counter-examples. Thus there is a connection between the value of the busy beaver function and Goldbach's conjecture, without referring to any sort of infinities. Cheers, — sligocki (talk) 23:13, 30 November 2009 (UTC)[reply]

My apologies to Mr. Chaitin, who has never stated that his constant or Busy Beavers can be used to solve Riemann's or Goldbach's unlike someone claims at the Chaitin's constant page. You can't check each even number eventually "without referring to any sort of infinities" unless you're a real die-hardcore ultrafinitist denying the existence of more than a finite number of integers. If Goldbach's is true then no matter how big a Turing machine you build, it can only check the first n even integers. You can't have an algorithm with computable length checking all the numbers. If you don't find a counterexample below the number given by calculating the Busy Beaver function of Turing machines with programs the length of Graham's number, it STILL doesn't prove a counterexample doesn't exist. By checking numbers you can only prove it wrong, never right. To prove it right you need induction or some other kind of THEOREM. I'm not saying a Busy Beaver wouldn't be a helpful tool in finding large primes or perfect numbers or proving Goldbach's wrong if it is wrong and getting arbitrarily large lower bounds for the possible counterexample. It is NOT "guaranteed to find any counter-example". Only any counterexample below any preset bound, however large but still finite.

Of course there's the teeny-weeny problem of Busy Beavers being non-computable and growing faster than any computable function, so the only way to find a Busy Beaver for a class of Turing machines is to run every possible program and at some point prove that all those still going will never terminate, the complexity of such a proof also growing at a non-computable rate, and then declare the machine which went on the longest before halting the Busiest Beaver. During that process you have run through all possible algorithms of that class, but you can assign them different meanings next time. I think building a quantum computer offers far more feasible prospects in the quest for bigger and better means of numbercrushing.

Check out Gregory Chaitin's website, he's got a lot to say about a lot of subjects, computability being only one of them. I once again apologize for hurriedly blaming him for someone drawing false conclusions from his amazing work.

A rather recent lecture on computability, in which he repeatedly states that you can only prove Riemann's or the "no odd perfect number" hypothesis, or generally any conjecture equivalent to a halting problem, WRONG, is here: http://www.cs.auckland.ac.nz/~chaitin/wlu.html Though he doesn't specifically name Goldbach's, or the conjecture that for a,b,c,d>1, a^b+1=c^d only when a=d=2 and b=c=3, these are also of the type equivalent to a halting problem. —Preceding unsigned comment added by 91.133.35.83 (talkcontribs) 07:56, 5 December 2009

Dear 91.133.35.83, I do not understand your argument about ultrafinitists or numbercrushing, but you are incorrect. Chaitin clearly says in his paper (page 3 middle of the page):

and he goes on to explain exactly how. He is an expert in the field and he is correct. But let me go further, here is a finite python program that will search for a counter-example to the Goldbach conjecture:

def goldbach_check():
  """Check Goldbach's conjecture for each even number N until it fails. If it never fails, never return."""
  N = 4
  while True:
    # Test Goldbach's conjecture for N.
    if not sum_of_primes(N):
      return N # If N is not the sum of two primes, fail.
    # Otherwise try the next even number.
    N += 2

def sum_of_primes(N):
  """Is N the sum of 2 primes?"""
  # Try all ways N could be the sum of 2 integers.
  for k in range(N):
    if is_prime(k) and is_prime(N-k):
      return True # N is the sum of two primes (k and N-k)
  # If none work, then N is not the sum of two primes.
  return False
Now, if there is an even number N which is not the sum of two primes, goldbach_check() will return it. Thus if Goldbach's conjecture is false, this algorithm will provide the counter-example. Conversely, if this program never halts, then that implies that Goldbach's conjecture is true. How do we know if the program will ever halt? Busy Beaver numbers. Cheers, — sligocki (talk) 09:42, 5 December 2009 (UTC)[reply]
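For anyone who wants to actually run the sketch above: it leaves is_prime undefined. A minimal trial-division version (added here for completeness; it is not part of the original comment, and any primality test would do) is:

def is_prime(k):
  """Trial division: is k prime? Slow, but enough for illustration."""
  if k < 2:
    return False
  d = 2
  while d * d <= k:
    if k % d == 0:
      return False
    d += 1
  return True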

" and he goes on to explain exactly how" The explanation goes like this

"An experimental approach is to use a fast computer to check whether or not P is true, say for the �first billion natural numbers. To convert this empirical approach into a proof, it would suffice to have a bound on how far it is necessary to test P before settling the conjecture in the affirmative if no counterexample has been found, and of course rejecting it if one was discovered. SIGMA provides this bound, for if P has program-size complexity or algorithmic information content k, then it suffices to examine the f�irst SIGMA(k +O(1)) natural numbers to decide whether or not P is always true. Note that the program-size complexity or algorithmic information content of a famous conjecture P is usually quite small; it is hard to get excited about a conjecture that takes a hundred pages to state."

Neither Goldbach nor Riemann has program-size complexity or algorithmic information content. Read the newer article.


Here is the fallacy in the program: The subprogram

# Try all ways N could be the sum of 2 integers.
for k in range(N):
  if is_prime(k) and is_prime(N-k):
    return True # N is the sum of two primes (k and N-k)
# If none work, then N is not the sum of two primes.

grows infinitely complex. Only the sums of odd integers need be checked, that is N/4 ways. You can improve a little, but it grows at the rate N/k where k is not much. "Say for the first billion natural numbers": for N = 1 000 000 000 that certainly is over 1 000 000 steps. For N = 1 000 002 that is the same amount of NEW steps, none of which are previously taken. The program is finite in Python but no finite Turing machine is a model for even the subprogram.

There is also no general function "is_prime". There is no general finite algorithm either. Eratosthenes' sieve is longer for larger numbers. There certainly isn't a Turing machine or subprogram "is_prime". For every new N you need to check if N-3 is prime. Rantalaiho74 (talk) 18:21, 1 October 2010 (UTC) a.k.a. 91.133.35.83[reply]

CT-Thesis


"There is an analog to the Σ function for Minsky machines ... This is a consequence of the Church-Turing thesis." Please excuse me if I misunderstand the concepts here. I thought that the equivalence of register machines and turing machines was a mathematical theorem, whereas the CT-thesis was an unproven philosophical assertion. Should it say instead "There is an analog for register machines because somebody proved they are equivalent", perhaps followed by "it is probably impossible to calculate the function by any means because of the CT-thesis." —Preceding unsigned comment added by 24.2.48.202 (talk) 20:02, 9 January 2009 (UTC)[reply]

This sounds like a legitimate concern to me. The Church-Turing thesis is not a theorem and we should avoid language referring to it as such. Dcoetzee 21:05, 9 January 2009 (UTC)[reply]
Yes, I agree and that statement was redundant based on the opening paragraph to the section, so I've removed it. Perhaps something should be said about why there is an analogy to the busy beaver in any formal model of computation. In fact it looks like the Wikipedia article model of computation is rather lacking. Sligocki (talk) 21:03, 15 January 2009 (UTC)[reply]

Green's Lower Bounds


In the Known Values section, the present article states the following:

Milton Green constructed a set of machines demonstrating that 

: (Where ↑ is Knuth up-arrow notation and A is Ackermann's function)

in his 1964 paper "A Lower Bound on Rado's Sigma Function for Binary Turing Machines". 

That seems to be a (possibly incorrect?) result of someone other than Green. I don't have Green's paper, but I do have two papers that discuss it, and they seem not to support the above inequalities. The first paper is Improved Bounds for Functions related to Busy Beavers by Ben-Amram and Petersen, 2002, which (on p.3) merely cites Green 1964 as showing that Σ(4n + 3) > A(n,n), where A is the Ackermann function. (Note the "4n+3" rather than "2n+4".) The second paper is Ackermann's Function in the Numbers of 1s Generated by Green's Machines by Julstrom, 2002. This paper discusses a function fM(x,y) (whose definition is very similar to that of the Ackermann function), which is the number of 1s left on the tape by a 2x-state Green's machine when started on the rightmost 1 of a tape containing a block of y 1s. Such a block of y 1s can be produced by a (y+1)-state TM started on an all-0 tape, so evidently one has the lower bound Σ(2x+y+1) ≥ fM(x,y); e.g., y = 3 gives Σ(2x+4) ≥ fM(x,3). However, this does not support the inequality stated in the article, as it is not generally the case that fM(x,3) > 3^^...^3 (with x up-arrows); e.g., one finds fM(3,3) = 45 < 3^^^3. Possibly the inequality should be Σ(2k+4) > 3^^...^3 (with k-2 3s) > A(k-2,k-2) ?

— r.e.s. 14:52, 16 November 2009 (UTC)[reply]

Yes I wrote that part and some of it is my own original research, sorry. I'll look up the original equations that I used for those inequalities and get back to you. Cheers, — sligocki (talk) 16:33, 16 November 2009 (UTC)[reply]
Unfortunately, I lost the copy of Green's paper I used to have. But I have written down that in his paper he gives an equation for the growth of his machines as by
and
for n odd and
for n even
The BBn are the Sigma scores for Green's machines. I derived the relation with the up-arrow and Ackermann functions from these recurrences. I was also confused by the bound from Ben-Amram's paper. (oops, misread the previous post — sligocki (talk) 03:22, 18 November 2009 (UTC)) Cheers, — sligocki (talk) 16:48, 16 November 2009 (UTC)[reply]
So,
And assuming :
And so:
Also, based on the Ackermann function article, . Those are the derivations I made. Cheers, — sligocki (talk) 03:18, 18 November 2009 (UTC)[reply]

I re-acquired a copy of Green's paper. Of note: the numbers I have called  here, he refers to as  and he [Green] uses  for a different sequence of machines (Class G machines) which appear to be what Ben-Amram refers to. On the other hand, Julstrom talks about Class M machines, which are yet another class Green used in the paper. Thus, I believe that we are all bounding completely different machines. Cheers, — sligocki (talk) 04:26, 18 November 2009 (UTC)[reply]

Now that I've had a look at Green's paper and have confirmed the "starting equations" cited above, it's evident that your results are correct. Very neat! It's interesting that the 3 ^(k) 3 expression, which appears also in Graham's number, can be introduced here in a very natural and uncontrived way.
— r.e.s. 15:32, 19 November 2009 (UTC)[reply]
Thanks for the confirmation. I think the 3 ^(k) 3 expression is mostly a coincidence. If you analyze the M class and G class machines, you'll find that they grow at different rates. I think the M-class grow similar to 2 ^(k) n+1 on a tape starting with n 1s and G-class grow similar to 3 ^(k) n+2, but they could be easily defined for even machines which would grow as 2 ^(k) n+2. However, I was pretty happy with how natural these lower bounds were to derive, I tried to get some upper bounds as well and that was not nearly so nice, you can see some of my partial analysis at User:sligocki/Green's numbers. Let me know if you have any ideas :) Cheers, — sligocki (talk) 23:34, 19 November 2009 (UTC)[reply]
I haven't had time to study these results as much as I'd like. (BTW, in your article there's still a reference to Gn, which is presumably supposed to be BBn.)
As an aside, however, I notice that a consequence of Σ(2k) > 3 ↑^(k−2) 3 is that it puts to shame the bound for Σ(12) quoted from Dewdney in the present article. In particular, the bound that we get happens to be exactly the number g1 in Graham's sequence; that is, Σ(12) > 3 ↑↑↑↑ 3 = g1.
The discussion about the number g1 might help the general reader to see how enormously larger this is than the exponential tower quoted from Dewdney.
— r.e.s. 14:14, 22 November 2009 (UTC)[reply]

For a possibly more-direct way of seeing that Σ(2k) > 3 ↑^(k−2) 3 is implicit in Green's results, we can define

    and     ,    noting that

    and that     .

Then Green's definition of the B-functions yields

which can be directly compared line-by-line to the definition of the c-functions:

,

giving the desired inequality:

.

Hence

,

where the case for k = 2 is treated separately and follows from the known value Σ(4) = 13.

— r.e.s. 14:20, 23 November 2009 (UTC)[reply]

An uninteresting statement


The article says: "there is, for each input n, an algorithm An that outputs the number Σ(n) (see examples).". But what's so remarkable about this? This is true of all integer-valued functions, and to define such an algorithm is trivial. - Gigasoft 84.211.109.82 (talk) 04:51, 3 April 2010 (UTC)[reply]

Imho, the quoted statement is a triviality that deserves to be emphasised for the sake of readers unfamiliar with the subject. I believe there is a tendency for such readers, upon first encountering the fact that "Σ is not computable", to mistakenly think this implies that there exists some n for which Σ(n) is not computable. — r.e.s. (talk) 13:47, 3 April 2010 (UTC)[reply]
Ditto. This is a rather subtle distinction and deserves proper emphasis. But feel free to edit the wording if you don't think that the reason for this emphasis is clear. Cheers, — sligocki (talk) 04:29, 8 April 2010 (UTC)[reply]

Section "Non-computability of Σ"


Here was written:

A trivial but noteworthy fact is that every finite sequence of Σ values, such as Σ(0), Σ(1), Σ(2), ..., Σ(n) for any given n, is computable

But why is it "trivial"? Perhaps for some n there would be such a Turing machine that we couldn't identify whether it halts or not.

Eugepros (talk) 11:18, 5 August 2010 (UTC)[reply]

Oh, sorry, I understand. The Turing machine in itself is such an algorithm: even if we cannot identify whether it halts or not, the predicate "it halts" is defined as a partial recursive (but not necessarily total recursive) function. Perhaps it's worthwhile to explain this explicitly?

Eugepros (talk) 10:56, 9 August 2010 (UTC)[reply]

The noteworthy fact is simply that for any finite n, the sequence Σ(0), Σ(1), Σ(2), ..., Σ(n) is trivially computed by a program such as "PRINT <Σ(0)>, <Σ(1)>, <Σ(2)>, ..., <Σ(n)>", where <x> stands for the decimal representation of x. E.g., as mentioned in Computable_function#Examples, "PRINT 0, 1, 4, 6, 13" trivially computes Σ(0), Σ(1), Σ(2), Σ(3), Σ(4). A similar program exists for every finite n, whereas no such program exists for the entire infinite sequence (because a program is by definition a finite-length string). Perhaps this fact should not be called trivial, as it's the computation involved that's trivial. I've reworded the sentence accordingly. — r.e.s. (talk) 16:22, 9 August 2010 (UTC)[reply]
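Spelled out as code (an added illustration using the values quoted above, not part of the original comment), the "trivial" program is just a finite lookup table:

KNOWN_SIGMA = {0: 0, 1: 1, 2: 4, 3: 6, 4: 13}   # Σ(0)..Σ(4), the values quoted above

def sigma_up_to_4(n):
  """Computes Σ(n) for n <= 4 only; an analogous table exists for every fixed bound,
  even when (as for n = 10) nobody currently knows how to write it down."""
  return KNOWN_SIGMA[n]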
Another point ... A BB Turing machine that leaves Σ(n) '1's on the tape might not serve as the very algorithm for Σ(n), because standard conventions (unlike the BB game) require the '1's that encode output to be in a contiguous block. (edited) — r.e.s. (talk) 01:51, 10 August 2010 (UTC)[reply]

Hmm, I don't understand why "A similar program exists for every finite n". Actually, I know that we don't even have at our disposal a program for "PRINT <Σ(10)>". And we can face fundamental mathematical problems trying to find the number Σ(10), because the halting problem for some Turing machine (of 10 states) might be unresolvable. Can't we? — Eugepros (talk) 11:22, 10 August 2010 (UTC)[reply]

Oh, I beg your pardon for unintentionally applying constructive logic. I understand that you deduce "Σ(10) exists" from "every 10-state Turing machine either halts, or not". But this axiomatics is too strong for me... I'm interested in what those who don't accept the law of excluded middle can say about the computability of Σ(n)... Eugepros (talk) 12:28, 10 August 2010 (UTC)[reply]

I think that when you say "And we can face fundamental mathematical problems, trying to find the number Σ(10), because halting problem for some Turing machine (of 10 states) might be unresolvable. Can't we?", you are actually asking the very kind of question that concerned Rádo & Lin in the first place. As I noted in the above section Non-computability of Σ: emphasizing a possible pitfall, Rádo & Lin explicitly stated that they were looking for "clues regarding the formulation of a fruitful concept for the effective calculability (and noncalculability) of individual well-defined integers". As far as I know, no such concept has yet been formulated, and I don't know to what extent constructive logic might play a role.
r.e.s. (talk) 16:49, 10 August 2010 (UTC)[reply]

As to the role of constructive logic: as far as I know, it was specially developed to answer such kinds of questions. A constructive proof of existence for such an individual integer (like Σ(10)) is supposed to be the proof of its "effective calculability". Or did I misunderstand something? Eugepros (talk) 08:44, 11 August 2010 (UTC)[reply]

Whatever role constructive logic might (or might not) have in this issue, evidently Rádo & Lin themselves regarded it as inadequate to their purposes. I say this because constructive logic was already well-developed when they wrote (in 1963) that "at present there is no formal concept available for the “effective calculability” of individual well-defined integers like Σ(4),Σ(5), ...", and that therefore "it is of course not possible to state in precise form the conjecture that there exist values of n for which Σ(n) is not effectively calculable." (Please excuse the bolding, but I think this conjecture is important, and, however ill-formulated, probably deserves to be mentioned in the article.) Also, Rádo had written earlier in the 1962 paper that "this [principle of the largest element] ... may take us well beyond the realm of constructive mathematics", so it seems they were deliberately looking beyond constructive mathematics to formulate the as-yet unavailable concepts.
r.e.s. (talk) 15:51, 11 August 2010 (UTC)[reply]

Concerning the conjecture that "there exist values of n for which Σ(n) is not effectively calculable": I guess it would be more precise to say that it's possible to state, but not possible to prove. This conjecture is formalized as ∃n ¬(Σ(n) is effectively calculable) in the constructive predicate calculus. But this isn't the case in the classical predicate calculus, because in the latter it is trivially disproved (using the law of excluded middle).

In constructive logic (see the Brouwer–Heyting–Kolmogorov interpretation), a proof of ∃n ¬(Σ(n) is effectively calculable) is a pair of an individual number n and a proof that Σ(n) is not effectively calculable. The latter proof is an implication to absurdity from the assumption that Σ(n) is effectively calculable; it means that we should find an individual Turing machine M for which it is undecidable whether it halts or not. We cannot formalize the concept of "proven undecidability" for the halting problem of an individual Turing machine. Thus, we cannot prove the conjecture ∃n ¬(Σ(n) is effectively calculable), but we can state it, and this statement isn't "trivially false" in the constructive sense.

It's my opinion, perhaps it's wrong... Eugepros (talk) 10:18, 12 August 2010 (UTC)[reply]

I would say it is definitely wrong, for reasons already given. Rádo & Lin clearly explain that the conjecture could not as-yet be precisely stated, because it involves a concept that could not as-yet be properly formalized. In particular, for a given value of n, their concept "Σ(n) is not effectively calculable" is evidently not formalized merely by a formula of that kind. Note that constructive logic, the BHK interpretation, and Kleene's realizability interpretation were all well-developed at the time, and it was Kleene who provided Rádo & Lin with the information that their concept could not as-yet be properly formulated.
r.e.s. (talk) 17:49, 12 August 2010 (UTC)[reply]

Start state

[edit]

I cannot find any mention of the initial state (the start state) with which "running the machine" has to begin. Or am I just blind?
H.Marxen (talk) 16:35, 26 August 2010 (UTC)[reply]

It seems to have been an omission. I've made it explicit, and have also given a shot at improving some of the wording in this section ("The busy beaver game").
r.e.s. (talk) 17:48, 27 August 2010 (UTC)[reply]
Nice rewording. — sligocki (talk) 14:21, 28 August 2010 (UTC)[reply]

Σ(2,3) and S(2,3)

[edit]

Hello, I've recently seen that an edit stating that Σ(2,3) = 9 and S(2,3) = 38 has been reverted since there is "no published proof". However, on Pascal Michel's website there is a reference to Lafitte and Papazian 2007 as a proof, accessible at http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.104.3021&rep=rep1&type=pdf#page=231 Is this considered unconvincing, or not published? Thanks - Yves —Preceding unsigned comment added by 193.49.219.185 (talk) 09:35, 3 September 2010 (UTC)[reply]

Thanks for the reference! Yes, I take it as a published proof, and I have notified User:Sligocki, who did the revert. Maybe he now wants to undo the revert. --H.Marxen (talk) 17:20, 3 September 2010 (UTC)[reply]
Wait a minute... I have been too fast: the case is not yet completely settled, IMHO. I'm going to follow up with an explanation. --H.Marxen (talk) 12:09, 4 September 2010 (UTC)[reply]

I have a concrete doubt, and a general problem with the presentation of a proof for some Σ or S value. I'll start with my concrete doubt:

  • On page 222, section 3.1, Lafitte and Papazian (L&P for short) say: "Furthermore, we take the first transition (State A on 0) as writing 1, going right and entering state B."
  • The problem is with "writing 1": that is OK for the computation of Σ, since one can easily show that omitting all "writing 0" machines does not exclude one with the maximal Σ value.
  • For the computation of S, I do not know of such a proof. I can show that the maximal number of steps from writing-0 TMs is at most (n-1) larger than the maximal number of steps from writing-1 TMs, but that's it.
  • Hence, I have some doubt: there might exist a writing-0 TM, omitted by L&P in their computations, which does 38+(2-1) = 39 steps and leaves fewer than 9 non-blanks.

The more general problem I have is the question: how much detail must be published for something to be accepted as a final proof for a special case of Σ and/or S?

  • For a mathematical proof (that is, where we are), traditionally one has to give enough detail that a knowledgeable reader can re-think the proof and completely convince himself of its correctness.
  • Such a proof would be much too large in this case. It would not make sense to publish one.
  • The authors never did write such a proof. They wrote some programs that did (or at least easily could) construct such a proof. On page 225 L&P say: "Our program can output the long proof ...". Fine, but:
  • Like all other readers of their paper, I do not have access to such an output, nor do I have access to the program(s) L&P used for this.

Now, am I convinced? Partly, but not completely. Have I checked the proof? Sorry, no. I could not do that, except by writing my own version of the program(s) they have used.

  • That sounds "fair". They did work hard for it, and maybe the program was never meant to be published (it is not nice or cleaned up inside).
  • That could be a good idea: a second implementation is a much better check than proofreading just one implementation.
  • They have given quite some information about the methods used by the program for enumeration of TMs.
  • For the detection of non-halting TMs they do not give implementation details, but rather a detailed classification, like "there are X TMs that do Y with a period of Z steps." That can be considered an implementation hint, and can be used to check correctness of a second implementation.

As a matter of fact, I have not (yet) written such a second implementation.

Now, I'm not really sure how to judge the L&P paper. Brady in 1983 gave a lot more detail in order to settle (4,2), but that does not strictly imply that everybody else must do the same. I'm not even sure whether the bulk of detail given by Brady really makes a difference, since I'm not sure whether any reader really can be sure to have checked all of it. How much of a burden can the author place on his readers?

Any suggestions or insights?
--H.Marxen (talk) 12:51, 4 September 2010 (UTC)[reply]

Following up myself... Obviously I still have to learn a lot. Meanwhile I have learned about the way this encyclopedia is constructed, e.g. WP:IRS. I seem to have mixed up the work of a scientist with that of a Wikipedia author. The latter is not supposed to judge the content of scientific publications, but rather to judge their reliability.

In this case we have a conference paper written by two mathematicians. It looks like a quite normal primary source. No hints for doubt. It is published and as reliable as most other publications. There is only one citation, by a later work of the first author. According to Wikipedia guidelines a secondary source would be preferred, but the topic "busy beavers" is so small (in number of publications) that there do not exist many secondary publications about this topic at all, and waiting for more citations or even for secondary publications would mean dropping the topic from Wikipedia, at least in parts.

Up to now other primary sources have been used as the basis for this article, and following that practice (which I consider to be OK) we should accept the claims about (2,3), add the paper to the list of references, and update the result tables.

Since I still feel a bit confused, I'd like some feedback from people with more experience as a wikipedia author, before I go ahead and do the change.
H.Marxen (talk) 01:25, 6 September 2010 (UTC)[reply]

Hello Heiner, First of all I want to thank you for your expert analysis, I didn't realize the degree of complexity of such proofs. Being much more a reader than an editor in Wikipedia (only occasionally, often for minor changes, I still don't have an account), I can only state my opinion on this particular question. For Σ(2,3) and S(2,3) only this source (L&P) is published, and seeing your analysis, I can guess the likely reason why Sligocki made the revert: according to Pascal Michel's website he found an independent proof but this is yet to be published; thus, he has an expert opinion on the degree of confidence of the proof claimed by LP, and is not convinced until he is able to publish himself as an independent confirmation. Therefore the situation is this: Σ(2,3) and S(2,3) have been claimed to be equal to 9 and 38 according to L&P but community consensus (including yourself) is yet to be reached. I would suggest to change the table, but with this caveat being clearly visible. I would be interested to see Sligocki's opinion on this. - Yves —Preceding unsigned comment added by 193.49.219.185 (talk) 15:11, 6 September 2010 (UTC)[reply]

Hi everybody, sorry to be so late to the game. Like Heiner, I had not heard about Lafitte and Papazian's paper before and had only heard about claimed unpublished proofs that the class of 2-state 3-symbol machines was computable. In fact, my father and I have categorized all tree-normal-form 2x3 TMs as well, although we would not be confident enough about them to publish a paper. I'll have a look over the paper; in the meantime, if Heiner or Pascal think this proof holds water, feel free to add it back. Likewise, I completely agree that we could add a note saying that a proof had been submitted, although not verified. Happy busy beavering, — sligocki (talk) 17:24, 6 September 2010 (UTC)[reply]
Hello (I'm the same as 193.49.219.185), thank you Shawn for your intervention. Since I'm not a specialist in computer science (my domain is materials science) I don't know exactly how peer review proceeds for conference papers such as L&P's. However it can be said that: (i) L&P presented their proof publicly, without being refuted, (ii) the proceedings paper has been reviewed and is thus likely free of gross methodological errors, (iii) the paper hasn't been refuted since then (2007), and (iv) it has been cited by P. Michel as a proof; P. Michel very likely read it thoroughly without finding major objections. Thus, the proof holds water to this extent. But, just as in another domain I'm interested in (the discovery of superheavy nuclei), though valuable, this proof is still a proposition awaiting independent confirmation. Because of its inherent complexity, the scarcity of specialists and the lack of specialist time, AFAIK confirmation could take a few years. A good example is given in D.Briggs' forum on finishing the (5,2) case: the resolution of the 43 holdouts is just the first step of a proof, necessary for reducing the uncertainty level (as of today, one or more of these holdouts could still finally stop after billions or trillions of steps, exploding the present Σ(5,2)/S(5,2) values) but not sufficient for firmly establishing a proof whose complexity level is surely much higher than for the (2,3) case. (btw I would suggest adding D.Briggs' forum as a link). - Yves —Preceding unsigned comment added by 77.196.150.149 (talk) 21:24, 9 September 2010 (UTC)[reply]
I still have not had a chance to read the paper (always busy), but it does sound like it has gone through some reasonable review. And as I mentioned, I can confirm the facts (up to a small possible error of only enumerating tree-normal-form machines). I've reverted my earlier edit, so these are listed as exact values unless anyone voices an objection. Now I must congratulate Lafitte and Papazian :) Cheers, — sligocki (talk) 05:03, 10 September 2010 (UTC)[reply]
Thanks again. As soon as I have time I intend to (i) add the L&P reference and the caveat, and (ii) add a link to the Briggs' forum for the (5,2) case - Yves —Preceding unsigned comment added by 193.49.219.185 (talk) 09:07, 10 September 2010 (UTC)[reply]
Additions done. Hoping the formulation about the caveat isn't heavy or misleading - Yves —Preceding unsigned comment added by 77.197.251.244 (talk) 13:48, 26 September 2010 (UTC)[reply]

One more thought about effective calculability of individual integers like Σ(n)

[edit]

Assertions like "there exists a program PRINT <Σ(n)>" look very unconvincing to me, because they are derived from unobvious axiomatics. But I guess that there is another way to prove that integers like Σ(n) are effectively calculable. If I'm wrong, please correct me. We should express the sentence "M halts" (for any given Turing machine M) in a formal language of arithmetic. I guess that we don't need multiplication for this purpose, so the language of Presburger arithmetic should be sufficient. But Presburger arithmetic is a complete and decidable theory. The latter means that there exists an algorithm which decides whether the sentence "M halts" is true or false. Thus, we can effectively calculate Σ(n) for any given n.

Eugepros (talk) 12:33, 21 October 2010 (UTC)[reply]

You have proven that Presburger arithmetic is not powerful enough to describe the halting problem :) — sligocki (talk) 05:00, 27 October 2010 (UTC)[reply]

Hmmm, why? As far as I know, there is no theorem about the undecidability of the halting problem for some individual Turing machine. Alan Turing proved in 1936 only that a general algorithm to solve the halting problem for all possible program-input pairs cannot exist.

Eugepros (talk) 13:50, 28 October 2010 (UTC)[reply]

You are right. That is the basis for finding any busy beaver results. But you said "Thus, we can effectively calculate Σ(n) for any given n". This would allow you to solve the general halting problem. Your assumption is that you can convert any sentence "M halts" into formal Presburger arithmetic. But there cannot be a general algorithm for doing that, or it would contradict the undecidability of the halting problem.
Ah, but maybe you meant that for each n you would need a new and more ingenious method for converting it to the formal language? That could be possible, but remember, you have to check a lot of machines! Cheers, — sligocki (talk) 22:13, 30 October 2010 (UTC)[reply]

Yes, there cannot be a general algorithm to convert any sentence "M halts" into formal Presburger arithmetic. But maybe we can prove that "M halts" is convertible into the formal language of Presburger arithmetic for any M? Such a proof doesn't mean constructing a general method for the conversion.

Eugepros (talk) 12:37, 11 November 2010 (UTC)[reply]

Hm, these discussions tend to diverge from usefulness. But I can prove that every statement "M halts" is convertible into a very simple formal language for any M: the language with two sentences, "true" and "false". Every M either halts or doesn't, therefore every statement "M halts" is either true or false :)
I suspect this is not what you were hoping for. I think the central problem is that you want a more general solution where the formal language statement resembles the original question. For example, it might be interesting to know if there was an algorithm to convert all 5-state, 2-symbol machines questions "does M halt" into a specific formal language, if so, perhaps we could solve BB(5, 2) algorithmically... — sligocki (talk) 08:10, 13 November 2010 (UTC)[reply]

I understand you. But the problem is that proofs like "the statement is convertible because it's either true or false" mean nothing to me. :( I know, it's classical logic. But I cannot trust an inference from nothing but the law of excluded middle. The statement "M halts" can obviously be translated into something like: "There exists a number k such that q_k = 'stop', where the q_k are the states of M at every k-th step". I can't see why the sequence q_k couldn't be defined recursively, using a formal language like the one of Presburger arithmetic... Or does the conjecture that the functions q_k are recursive for every k imply that q, as a function of both M and k, is also recursive?

Eugepros (talk) 11:57, 13 November 2010 (UTC)[reply]

I've just added this section, but the wording is somewhat tricky. Presently, the conclusion ("in the context of ordinary mathematics, no specific number can be proved to be greater than Σ(10↑↑10)") is somewhat misleading, as it seems to imply that for n = Σ(10↑↑10), not even "n+1 > n" is "provable in ordinary mathematics". But the intended meaning is that in the given formal system, there is a formula φ(<n>) (where <n> is a unary notation for the number n) that expresses "n > Σ(10↑↑10)", and the conclusion is that no sentence of the form φ(<n>) is provable in that system.
r.e.s. (talk) 19:06, 16 November 2010 (UTC)[reply]

Statement about computability of subsets of Σ is messed up and misleading

[edit]

It says this:

"A noteworthy fact is that, theoretically, every finite sequence of Σ values, such as Σ(0), Σ(1), Σ(2), ..., Σ(n) for any given n, is (trivially) computable, even though the infinite sequence Σ is not computable (see computable function examples)."

In my most charitable interpretation of this sentence, it's saying something that's trivially true but perhaps misleading and not too related to the notion of the computability of the actual sequence of Σ. It's kind of like saying that "every digit in the decimal expansion of an uncomputable number is in isolation a computable number, because a 'digit' is a value from 0-9 and hence a natural number and the naturals are computable." This is true, but it's like this random factoid that can easily send someone in the wrong direction. It's misleading because it seems very much like it's saying that one can figure out, even aside from "practicality" restrictions, what the value of Σ(n) is for any n. This is false.

Using our decimal expansion/digit metaphor, it would be like saying that one can figure out WHICH POSITION in the expansion contains which of these computable entities we've called "digits," and that's just not true. If it were true, the number wouldn't be uncomputable at all. Aside from that, the fact that the concept of a "digit" itself, being a value from 0-9, is a computable number is basically irrelevant to what's being discussed here. And in the case of Σ, it's even more irrelevant; if every value of Σ(n) is a finite natural number, then we already know that values of Σ(n) are in the set of all computable numbers by definition. We already went over this when we defined Σ as a function mapping from N -> N. So there's no need to state that because the codomain of Σ is the set of natural numbers, it's also a subset of the set of computable numbers.

But then the next sentence after what's written above is this, which makes it seem like it's saying something that's just plain false (first sentence included again for reference):

"A noteworthy fact is that, theoretically, every finite sequence of Σ values, such as Σ(0), Σ(1), Σ(2), ..., Σ(n) for any given n, is (trivially) computable, even though the infinite sequence Σ is not computable (see computable function examples). Furthermore, for sufficiently small n, it is also practical to compute Σ(n)."

It now seems like it's saying that the problem with computing Σ(n) is that as n increases, the computation involved becomes so complex that it's not really feasible to figure out Σ(100) or something. But that's not really it. It's just not true that we can figure out what Σ(n) is for every choice of n -at all-, independent of any notion of "practicality," on a completely theoretical level, even if we had an infinite amount of time. It's only for some n that we can compute it at all, period.

I see there was some discussion on this before, but it doesn't look like it was resolved. I'm not sure how to change it yet though. I'll post this for now and maybe we can get a discussion on it. I'd like to remove these few sentences because it either seems misleading, unrelated, or false, depending on how I interpret this.

71.230.120.206 (talk) 09:09, 19 January 2012 (UTC)[reply]

I agree, there was too much focus on how the integers were in-fact computable. I've removed another paragraph that stressed this. How does it look now? Cheers, — sligocki (talk) 23:58, 12 February 2012 (UTC)[reply]

How do we know the value of BB(6)

[edit]

Even if we ran a Turing machine until the heat death of the universe, we would get nowhere near the point at which it halts.

Bubby33 (talk) 21:06, 18 April 2015 (UTC)[reply]

Further discussion

[edit]

The untitled comments below were taken from the top of the page by User:Negrulio on 28-12-2015 --Negrulio (talk) 16:03, 28 December 2015 (UTC)
It would be nice to more specifically describe what the term "n-state machine" means, i.e. whether it is a Turing Machine or an arbitrary n-state machine. marek 19:38, 23 February 2006 (UTC)[reply]


There is a UTM(2,22), so BB(22) would provide an answer to the Halting Problem: Lower bounds for universal Turing machines. So BB(n) can't be calculated for n>=22, which makes the application section useless. —Preceding unsigned comment added by Rantalaiho74 (talkcontribs) 16 October 2010

No, there is a 22 state UTM, but it requires the input to be specified on the tape. Busy beaver is run with a blank tape. With no input. In fact, if you look at the definition of the 22 state UTM, and try running it on a blank tape, the behavior will probably be pretty simple because a blank tape probably doesn't encode a very interesting TM input. Please stop editing this page unless you can cite a reliable source. Cheers, — sligocki (talk) 06:36, 17 October 2010 (UTC)[reply]

Proof of the fact "S(n) grows faster than any computable function F(n)" follows from the properties of the composition Create_n0|Double|EvalF|Clear, where n0 is the size in states of the machine Double|EvalF|Clear. I think that the current proof at the beginning of the article is much too complex (it uses log2(n) and UTM properties) and should be changed. Skelet 08:26, 27 Oct 2004 (UTC)

In the same manner, the proof of "Σ(n) grows faster than any computable function F(n)" follows from the properties of the composition Create_n0|Double|EvalF|Increment. Skelet 08:31, 27 Oct 2004 (UTC)


We can solve the halting problem for TMs up to any fixed size. What's the corresponding principle here in terms of real programming languages? Could we calculate S(n) up to a fixed n by running a particular program (a big TM enumerator perhaps?) of larger size? Do we know how much larger than n the machine to calculate S(n) must be? Lunkwill 29 June 2005 07:28 (UTC)

Nope. Put simply, if we knew how big a Turing machine we needed to calculate S(n), then we'd be able to calculate S(n). --Ihope127 16:59, 2 April 2006 (UTC)[reply]

Removal of "Non-reliable" Sources

[edit]

This edit removed a "non-reliable" source from the article: https://en.wikipedia.org/w/index.php?title=Busy_beaver&diff=756743646

I don't think this is justified. I'm not saying that Wikia is a reliable source. But what I am saying is that to dismiss this based on the reliability of the source is a genetic fallacy, because the validity of a deductive argument is independent of the reliability of the source of the argument. In other words, a deductive proof is a proof, regardless of where a proof comes from or is published. However, at the same time I do understand that Wikia isn't a WP:RS, and accordingly shouldn't be included in Wikipedia. But since the removal of this source reduces the quality of the article by leaving a non-trivial claim uncited, and it removes a place where people can learn more about what is being said, I think this calls for WP:IAR. IWillBuildTheRoads (talk) 20:07, 8 January 2017 (UTC)[reply]

Maximum Tape Length

[edit]

What is the name of the function that counts the maximum right shift, R(n)? How does it grow proportionally? 91.66.15.241 (talk) 21:08, 13 March 2017 (UTC)[reply]

Less moves for 3 state Busy Beaver

[edit]

Very nice article. The 3-state BB in the Examples section takes 14 steps. The following machine takes 12 steps and always writes a 1.

      A    B    C
0    1RC  1RA  1LB
1    1RA  1LC  1H

Is this worth mentioning? Do with my remark as you wish.

Jacob.Koot (talk) 18:38, 11 April 2017 (UTC)[reply]

Below is the table of moves. ((x ...) (y z ...)) represents the tape (x ... y z ...) with the tape-head at y.

.......................................... initial tape: (() (0))
move 1, state A -> C, symbol 0 -> 1, move: R, new tape: ((1) (0))
move 2, state C -> B, symbol 0 -> 1, move: L, new tape: (() (1 1))
move 3, state B -> C, symbol 1 -> 1, move: L, new tape: (() (0 1 1))
move 4, state C -> B, symbol 0 -> 1, move: L, new tape: (() (0 1 1 1))
move 5, state B -> A, symbol 0 -> 1, move: R, new tape: ((1) (1 1 1))
move 6, state A -> A, symbol 1 -> 1, move: R, new tape: ((1 1) (1 1))
move 7, state A -> A, symbol 1 -> 1, move: R, new tape: ((1 1 1) (1))
move 8, state A -> A, symbol 1 -> 1, move: R, new tape: ((1 1 1 1) (0))
move 9, state A -> C, symbol 0 -> 1, move: R, new tape: ((1 1 1 1 1) (0))
move 10, state C -> B, symbol 0 -> 1, move: L, new tape: ((1 1 1 1) (1 1))
move 11, state B -> C, symbol 1 -> 1, move: L, new tape: ((1 1 1) (1 1 1))
move 12, state C -> H, symbol 1 -> 1, move: , new tape: ((1 1 1) (1 1 1))
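For anyone who wants to re-check this, here is a minimal C sketch (my own, not from a published source) that simulates the machine in the table above; the [symbol][state] table layout and the -1 Halt convention follow the C program posted further down this page:

#include <stdio.h>

#define TAPE_N 32   /* generous finite window; this run stays well inside it */

/* States A=0, B=1, C=2; next state -1 means Halt.
   Tables are indexed [current symbol][current state]. */
int write_tb[2][3] = { { 1, 1, 1 },      /* on 0: A, B, C all write 1        */
                       { 1, 1, 1 } };    /* on 1: A, B, C all write 1        */
int move_tb[2][3]  = { { +1, +1, -1 },   /* on 0: A right, B right, C left   */
                       { +1, -1,  0 } }; /* on 1: A right, B left, C no move */
int state_tb[2][3] = { { 2, 0, 1 },      /* on 0: A->C, B->A, C->B           */
                       { 0, 2, -1 } };   /* on 1: A->A, B->C, C->Halt        */

int main(void)
{
    int tape[TAPE_N] = { 0 };
    int head = TAPE_N / 2, state = 0, steps = 0, ones = 0;

    while (state != -1) {                /* the halting transition counts as a move */
        int sym = tape[head];
        tape[head] = write_tb[sym][state];
        head += move_tb[sym][state];
        state = state_tb[sym][state];
        steps++;
    }
    for (int i = 0; i < TAPE_N; ++i)
        ones += tape[i];
    printf("halted after %d moves with %d ones on the tape\n", steps, ones);
    return 0;
}

It reports 12 moves and six 1s, matching the trace above.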

(I improved the layout --H.Marxen (talk) 18:58, 3 May 2017 (UTC))[reply]

Well, the usual busy beaver metrics reward the larger numbers.
Achieving the same (large) number of tape marks in fewer steps should, logically, be interesting. Hence, yes, an expert could find this worth a study. Otherwise... such a study has not yet been done, AFAIK, so we cannot cite such work. --H.Marxen (talk) 18:58, 3 May 2017 (UTC)[reply]

Thanks for improving the layout. I tried it myself, but did not know how to do it. I agree that unpublished work must not be included. Nevertheless, my example is a proof by itself and could be interpreted as a publication via Wikipedia. It seems that Wikipedia does not accept publications made via Wikipedia? Jacob.Koot (talk) 16:20, 12 May 2017 (UTC)[reply]

https://en.wikipedia.org/wiki/Turing_machine_examples#3-state_Busy_Beaver shows a busy beaver that takes 12 steps and always writes a 1. It is not exactly the same as my one, but works all the same. The reference is: "derived from Peterson (1988) page 198, Figure 7.15." I think we can include Peterson's example. Jacob.Koot (talk) 17:24, 21 June 2017 (UTC)[reply]

Requirement of exact number of steps

[edit]

The article mentions that "a statement of the exact number of steps it takes to reach the Halt state" is necessary because without it "the problem of verifying every potential entry is undecidable", citing the halting problem. However, isn't the halting problem that of determining halting of an arbitrary machine on ARBITRARY INPUT, and isn't the latter requirement key to the proof?

If there is a generalization of the halting theorem that prohibits creating an algorithm that for one specific input would predict whether an arbitrary machine would stop, such a generalization should definitely be mentioned on the corresponding "halting problem" page, and the current article should link specifically to that generalization. Otherwise the current article needs to reflect that the "exact number of steps" is only required because we don't know an algorithm to check, and do not even know WHETHER such an algorithm could exist. — Preceding unsigned comment added by 104.53.222.39 (talk) 01:37, 2 June 2020 (UTC)[reply]

I think the halting problem for any arbitrary input can be reduced to the halting problem for a blank-tape input (or any other specific input) by just taking your arbitrary-input machine and turning it into a larger blank-tape machine that just writes the input on the tape before starting. So solving the halting problem for the blank-tape machines solves it for all of them. Mrfoogles (talk) 22:55, 17 July 2024 (UTC)[reply]
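To make that reduction concrete, here is a hedged C sketch (my own illustration; the Rule/Machine encoding and the prepend_input name are hypothetical, not from any source cited here). It assumes a 2-symbol machine M that is started on the leftmost cell of its binary input w: M_w first writes w, walks back to w's first cell, and then behaves exactly as M, so a blank-tape halting decider would also decide "does M halt on w".

#include <stdio.h>
#include <string.h>

#define MAXQ 64                 /* arbitrary cap for this sketch */
#define HALT (-1)

typedef struct {                /* one transition: symbol to write,
                                   move (+1 right, -1 left), next state */
    int write, move, next;
} Rule;

typedef struct {
    int n;                      /* number of states; the start state is 0 */
    Rule rule[2][MAXQ];         /* indexed [symbol][state]                */
} Machine;

/* Build M_w from M and a binary input string w.
   Assumes 2*strlen(w) + m->n <= MAXQ (not checked in this sketch). */
Machine prepend_input(const Machine *m, const char *w)
{
    int k = (int)strlen(w);
    Machine out;
    out.n = 2 * k + m->n;

    /* States 0..k-1: write w[i] and step right. */
    for (int i = 0; i < k; ++i)
        for (int s = 0; s < 2; ++s)
            out.rule[s][i] = (Rule){ w[i] - '0', +1, i + 1 };

    /* States k..2k-1: walk back left to the first cell of w;
       the last of them hands over to M's start state (index 2k). */
    for (int i = 0; i < k; ++i)
        for (int s = 0; s < 2; ++s)
            out.rule[s][k + i] = (Rule){ s, -1, k + i + 1 };

    /* M's own states, shifted by 2k; its Halt transitions stay Halt. */
    for (int i = 0; i < m->n; ++i)
        for (int s = 0; s < 2; ++s) {
            Rule r = m->rule[s][i];
            if (r.next != HALT)
                r.next += 2 * k;
            out.rule[s][2 * k + i] = r;
        }
    return out;
}

int main(void)
{
    /* Tiny demo: a hypothetical 1-state machine that writes a 1 and halts. */
    Machine m = { 1, { { { 1, +1, HALT } }, { { 1, +1, HALT } } } };
    Machine mw = prepend_input(&m, "101");
    printf("M has %d state(s); M_w has %d states\n", m.n, mw.n);
    return 0;
}

In this encoding each input cell costs two extra states, which is why restricting attention to blank-tape machines loses nothing for the halting problem.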

C code (4-state, 2-symbol busy beaver)

[edit]
#include <stdio.h>

#define STATE_N (4)     /* states A, B, C, D                 */
#define SYMBOL_N (2)    /* tape symbols 0 and 1              */
#define TAPE_N (16)     /* finite tape window used by this demo */

/* Transition tables for the 4-state, 2-symbol busy beaver champion,
   indexed [current symbol][current state] (A=0, B=1, C=2, D=3). */

int move_tb[SYMBOL_N][STATE_N] =    /* head movement: +1 = right, -1 = left */
    {
        { +1, -1, +1, +1 },
        { -1, -1, -1, +1 }
    };

int state_tb[SYMBOL_N][STATE_N] =   /* next state; -1 means Halt */
    {
        { 1, 0, -1, 3 },
        { 1, 2,  3, 0 }
    };

int write_tb[SYMBOL_N][STATE_N] =   /* symbol written into the current cell */
    {
        { 1, 1, 1, 1 },
        { 1, 0, 1, 0 }
    };

int tape[TAPE_N];

int main(void)
{
    int header = 11;
    int state = 0;
    int step = 0;

    while (1)
    {
        printf("%4d  ", step);
        for (int i = 0; i < TAPE_N; ++i)
            printf("%c%c  ", i == header ? "hABCD"[1 + state] : ' ', "_1"[tape[i]]);
        printf("\n");

        if (state == -1)
            break;

        /* Apply one transition: read the cell, write, move the head, switch state. */
        int read = tape[header];
        tape[header] = write_tb[read][state];
        header += move_tb[read][state];
        state = state_tb[read][state];

        if (header == -1 || header == TAPE_N)
        {
            printf("ERROR: out of tape (header=%d)\n", header);
            break;
        }

        step++;
    }

    return 0;
}

--MkMkMod (talk) 07:53, 17 July 2020 (UTC)[reply]

Maximum shifts function S(n)

[edit]

I suspect that S(n) was formerly noncomputable, but in 2013, when the first-order set theory (which is Rayo(n)) was defined, S(n) became computable.

In this video, Carbrickscity gave the reason why Rayo's number is noncomputable. The reason was: "No one will ever define a number with a googol symbols." The problem seems to be the number of symbols. There is not enough space in the observable universe for a googol symbols. Emk has shown that Rayo(7901) > S(2^65536 − 1), where S(n) is the maximum shifts function. Only 7901 symbols (which can easily be written down) and already 2^65536 − 1 states. This is why I suspect that S(n) became computable after Rayo(n) was defined.

The same symbol problem appears when proving TREE(3) in strictly finite mathematics. This would need 2^^1000 symbols. No one can ever finish such a proof, because there is not enough space in the observable universe for this many symbols. This reason was originally given in this video. So, User:84.154.72.51 wrote this idea for the exact value (or at least a really good bound) of TREE(3). The idea is the first-order set theory for TREE(3) in symbols. The first-order set theory is Rayo(n), where n is the number of symbols. Here, it has been shown that Σ(2000) > Loader's number ≫ TREE(3). Here, it has been shown that S(n) ≥ Σ(n) for all n. And of course, 2^65536 − 1 ≫ 2000. So, the first-order set theory would reduce the "required number of symbols for TREE(3)" from 2^^1000 to less than 7901. — Preceding unsigned comment added by 80.142.18.145 (talk) 20:41, 25 October 2020 (UTC)[reply]

Σ(17) > Graham's number and other comparisons

[edit]

It has been shown that Σ(17) > Graham's number.

For what n would Σ(n) be about as big as TREE(3), SSCG(3), SCG(13), Loader's number, Rayo's number and Fish number 7? — Preceding unsigned comment added by 84.151.253.231 (talkcontribs)

Error in Examples

[edit]

The 4-state, 2-symbol busy beaver Examples and Visualizations do not agree. The Examples section has 1RH for state C, symbol 0, while the Visualizations section has 0RH for state C, symbol 0. The Examples section states that the 4-state, 2-symbol busy beaver produces 13 ones in 107 steps and says "see image." The Visualizations version only produces 12 ones when it halts. This should be corrected. — Preceding unsigned comment added by 50.206.176.154 (talk) 04:17, 3 February 2021 (UTC)[reply]

@OrdinaryArtery: Could also be a chance to vectorize that diagram! ~~Ebe123~~ → report 17:11, 21 April 2021 (UTC)[reply]
The visualization uses the state prior to Halt to represent the Halt state. This works for the one-through-three-state diagrams because the state prior to Halt has symbol=1. Unfortunately, this shorthand causes a minor discrepancy in the four-state diagram. I've added a note to the caption to clarify this. 216.243.58.249 (talk) 04:45, 25 September 2022 (UTC)[reply]
PS: If the Halt state was instead represented by a black circle with no protrusion, the diagrams could be updated to remove this discrepancy. 216.243.58.249 (talk) 04:48, 25 September 2022 (UTC)[reply]

Did Σ(n) and S(n) become computable?

[edit]

Carbrickscity's reason why Rayo's number is uncomputable: “No one will ever define a number with a googol symbols.”

Using the latest bounds, it can be determined that Rayo(7339) > S(2^65536 - 1).

Only 7340 symbols in the first-order set theory and already 2^65536 - 1 states in the maximum shifts function, so I guess Σ(n) and S(n) became computable. Also, Rayo(n) became 84.154.72.51's idea for TREE(3) in symbols.

The first-order set theory only needs a few thousand symbols for TREE(3). So, if we use the first-order set theory for TREE(3) in symbols, we can find out the actual exact value of TREE(3). 84.154.65.218 (talk) 09:43, 10 February 2023 (UTC)[reply]

The bound on Rayo(7339) is found by defining what the maximum shift function is in FOST, as well as the number 2^65536 - 1. In other words this in no way proves it computable, because you've just rewritten the same definition in a different language. 86.3.78.253 (talk) 07:34, 14 September 2023 (UTC)[reply]
"Computable" is a property of a function, and a computable function is one which is computed by some Turing machine. "Uncomputable number" is an unfortunate piece of terminology which seems to have originated from the site Googology Wiki, it is unfortunate since computability/uncomputability is a property of a function, not a number. It is easy to prove by contradiction that the busy beaver function cannot be a computable function (if you assume for a contradiction that it were computable, the halting problem would be solvable by running an n-state Turing machine for BB(n) steps, returning "halt" if it halted by that point, and "non-halt" if it did not.)
Numbers outputted from uncomputable functions are not necessarily larger than numbers outputted from computable functions either, which is another reason that "uncomputable number" vs. "computable number" are not good pieces of terminology for cataloguing large numbers. In fact there are uncomputable functions which are eventually dominated by computable ones, for example if f(x) is "0 if the xth Turing machine halts and 1 otherwise", and g(x) = x^2, then g, a computable function, eventually dominates f, an uncomputable function. C7XWiki (talk) 04:13, 11 April 2024 (UTC)[reply]
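To illustrate the contradiction argument above in code (a sketch only, under my own assumed table encoding; this is not anyone's published method): a step-bounded simulator is trivially computable, and if S(n), the maximum shifts function, were computable too, then halts_within(M, S(n)) would decide halting for every n-state, 2-symbol machine M, since failing to halt within S(n) steps means never halting.

#include <stdio.h>
#include <string.h>

#define MAX_STATES 8            /* enough for this sketch          */
#define TAPE_N 65536            /* finite tape window for the demo */

/* Simulate a 2-symbol machine (tables indexed [symbol][state], next
   state -1 = Halt, start state 0, blank tape) for at most `bound`
   steps.  Returns 1 if it halted within the bound, 0 otherwise. */
int halts_within(int write_tb[2][MAX_STATES], int move_tb[2][MAX_STATES],
                 int state_tb[2][MAX_STATES], long bound)
{
    static int tape[TAPE_N];
    memset(tape, 0, sizeof tape);
    int head = TAPE_N / 2, state = 0;

    for (long step = 0; step < bound; ++step) {
        int sym = tape[head];
        tape[head] = write_tb[sym][state];
        head += move_tb[sym][state];
        state = state_tb[sym][state];
        if (state == -1)
            return 1;            /* halted within the bound */
        if (head < 0 || head >= TAPE_N)
            return 0;            /* fell off this finite window; a real
                                    implementation would grow the tape */
    }
    return 0;                    /* still running after `bound` steps */
}

int main(void)
{
    /* Demo: a 1-state machine that writes a 1, moves right and halts. */
    int write_tb[2][MAX_STATES] = { { 1 }, { 1 } };
    int move_tb[2][MAX_STATES]  = { { +1 }, { +1 } };
    int state_tb[2][MAX_STATES] = { { -1 }, { -1 } };

    printf("bound 1: %d, bound 0: %d\n",
           halts_within(write_tb, move_tb, state_tb, 1),
           halts_within(write_tb, move_tb, state_tb, 0));
    return 0;
}

Since the halting problem is undecidable, no such computable bound S(n) can exist, which is exactly the contradiction described above.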

Proposal: Change of notation

[edit]

What this article calls Σ(N) (aka "Rado's sigma function") seems to be called BB(N) in many sources: see for example [1], [2], [3], [4]. "BB" seems to be easier to remember than "Σ"; if I'm right that they are the same thing, I propose we change the usage here to replace it. — The Anome (talk) 20:54, 19 October 2023 (UTC)[reply]

Several of the sources you listed use BB(n) for S(n), not Σ(n). I think we should use BB for S(n), not the other way around, if we had to pick, but I'm not sure there is overall agreement on what BB(n) means. Mrfoogles (talk) 18:56, 3 July 2024 (UTC)[reply]

g(0) vs g(1) for Graham's number

[edit]

I was the editor who changed it, based on the then current version of the article Graham's number. I don't have a dog in this fight, I just wanted things to be consistent.

Since then, there is an ongoing dispute there over whether to zero-index or not. Seems to me that the best approach for the editors at this page is to wait for that dispute to be resolved and accept that verdict. If anybody here wants to get involved, the talk page for Graham's number is easy to find.

For now, I support the current revision that reverts my edit from two months ago. Pending whatever the editors at the topic article decide. Mr. Swordfish (talk) 02:03, 13 November 2023 (UTC)[reply]

I suggest removing the references here, and just writing
In 1964 Milton Green developed ...
in section Busy_beaver#Known_values_for_Σ_and_S, and
Likewise, we know that S(17) > Σ(17) > G, where G is Graham's number.
in section Busy_beaver#Applications; I also suggest removing the subtle distinction between "gigantic" and "enormous". - Jochen Burghardt (talk) 09:40, 13 November 2023 (UTC)[reply]

3 State 4 symbol Busy Beaver

[edit]

https://www.sligocki.com//2024/05/22/bb-3-4-a14.html

describes a Busy Beaver candidate which puts more than Ackermann(14) symbols on the tape; in fact, the exact number of non-zero symbols it leaves on the tape is known. NadVolum (talk) 11:41, 31 May 2024 (UTC)[reply]

New results

[edit]

BB(5) = 47,176,870. [5] --jpgordon𝄢𝄆𝄐𝄇 18:28, 2 July 2024 (UTC)[reply]

I've put that in, but I'm not sure the Σ(5) value isn't higher for some other machine. NadVolum (talk) 19:48, 2 July 2024 (UTC)[reply]
I also took out the >= sign in the BB(2,4) case, as it is now known that that is the busy beaver value. A higher value of BB(2,5) is known, but I don't believe any more values will ever be proven to be the busy beaver value. They all have cases which probably don't stop, but we can't prove that, because that would involve solving a hard problem like the Collatz one. NadVolum (talk) 23:42, 2 July 2024 (UTC)[reply]

Article technicality improvement

[edit]

I think it's important to:

  1. Reword the weasel 'disambiguation-like' language.
  2. Clearly explain how a busy-beaver program works sooner rather than later

Hence, for review:

The busy beaver game is a theoretical computer science problem to find a terminating program with a given sophistication that produces the most output (writing into the 'memory') possible.
On each move of the program, the current position is read and, combined with the current program state, determines what to write to the current position (the current value can be overwritten by itself) and which program state to run next.
Since an endlessly looping program producing infinite output or running for infinite time is easily conceived, such programs are excluded from the game.
The problem can also be phrased as finding the machine that runs for the longest time; both games are similarly difficult.
The busy beavers are implemented as a halting Turing machine with an alphabet of {0,1} which writes the most 1s on the tape, using only a given set of states. As such the rules for the 2-state game are as follows:
  • the machine must have at most two states in addition to the halting state, and
  • the tape initially contains 0s only.
Creating an attempt for the longest busy beaver game means conceiving a transition table aiming for the longest output of 1s on the tape while making sure the machine will halt eventually.
An nth busy beaver, BB-n or simply "busy beaver" is a Turing machine that wins the n-state busy beaver game. That is, it attains the largest number of 1s among all other possible n-state competing Turing machines. The BB-2 Turing machine, for instance, achieves four 1s in six steps.
Calculating the longest possible valid number of moves for more than a few states is incredibly time-consuming. Recent work has found the value for BB-5: 47,176,870 moves.[citation needed]
Deciding the number of 1s, or the running time, of any size Busy Beaver is incomputable. This has implications in computability theory, the halting problem, and complexity theory. The concept was first introduced by Tibor Radó in his 1962 paper, "On Non-Computable Functions"[citation needed]

2A00:23C8:CA00:3C01:2D8E:90F5:70D1:1973 (talk) 21:30, 2 July 2024 (UTC)[reply]

I don't know that we're going to do better than what's in the recent article about BB(5):
Turing machines perform computations by reading and writing 0s and 1s on an infinite tape divided into square cells, using a “head” that operates on one cell at a time. Every machine has a unique set of rules that governs its behavior.
Each of these rules specifies what the head should do when it moves into a new cell, depending on whether it encounters a 0 or a 1 already there. This means a Turing machine’s instructions can be summarized in a table with one row for each rule and two columns (one for when the head encounters a 0 and the other for when it encounters a 1). One rule might be, “If you read a 0, replace it with a 1, move one step to the right, and consult rule C,” in the first column, and “if you read a 1, leave it unchanged, move one step to the left, and consult rule A,” in the second. This is what all the rules look like, except for one special rule that tells the machine when to stop running.
Of course, we can't plagiarize this, but I think we could quote it, or rephrase it. In particular, giving concrete examples of specific rules instead of just abstractly describing a function of two inputs and three outputs will be more easily understandable to the layperson. Anyway, it's a place to start. Mr. Swordfish (talk) 21:53, 2 July 2024 (UTC)[reply]
Suggest adding the following Example, with surrounding text from the article included for context:
The n-state busy beaver game (or BB-n game), introduced in Tibor Radó's 1962 paper, involves a class of Turing machines, each member of which is required to meet the following design specifications:
  • The machine has n "operational" states plus a Halt state, where n is a positive integer, and one of the n states is distinguished as the starting state. (Typically, the states are labelled by 1, 2, ..., n, with state 1 as the starting state, or by A, B, C, ..., with state A as the starting state.)
  • The machine uses a single two-way infinite (or unbounded) tape.
  • The tape alphabet is {0, 1}, with 0 serving as the blank symbol.
  • The machine's transition function takes two inputs:
  1. the current non-Halt state,
  2. the symbol in the current tape cell,
and produces three outputs:
  1. a symbol to write over the symbol in the current tape cell (it may be the same symbol as the symbol overwritten),
  2. a direction to move (left or right; that is, shift to the tape cell one place to the left or right of the current cell), and
  3. a state to transition into (which may be the Halt state).
Example
The rules for state 1 might be:
If the current symbol is 0, write a 1, move one space to the left, and transition to state 2.
If the current symbol is 1, write a 0, move one space to the right, and transition to state 3.
There are thus (4n + 4)^(2n) n-state Turing machines meeting this definition because the general form of the formula is (symbols × directions × (states + 1))^(symbols × states).
The transition function may be seen as a finite table of 5-tuples, each of the form
(current state, current symbol, symbol to write, direction of shift, next state).
Mr. Swordfish (talk) 16:58, 3 July 2024 (UTC)[reply]
I did not realize this had just been added and have just cut down the section a bit: it's very helpful to have a clear definition but the large amount of detail was a bit intimidating, even though it was simplified (transition functions are a bit non-elementary). Hopefully this should give a clear definition without stopping people from scrolling past it if necessary (though of course the rest of the article is even more technical). Mrfoogles (talk) 18:53, 3 July 2024 (UTC)[reply]
What do you think about adding the Example as above? My experience teaching math to non-mathematicians is that a concrete example goes a long way towards explaining an abstract definition. Mr. Swordfish (talk) 19:57, 3 July 2024 (UTC)[reply]
I think that's a good idea, but maybe with a complete machine? Having just the first state is helpful, but I think it would be better to have a full machine, maybe with a short summary of what it does. The only problem is we don't want to divert too long from Busy Beavers to Turing machines, and the section is pretty long already. A 1-state Turing machine would be quicker to write (only 2 lines, compared with the 4 needed for a 2-state Turing machine), but wouldn't illustrate the changing of the states.
It would be fun to have one of the busy beavers as an example, but the 1-state is probably too simple and the 2-state overly complicated.
Maybe:
Example
The rules for a 1-state Turing machine might be:
  • In state 1, if the current symbol is 0, write a 1, move one space to the right, and transition to state 1
  • In state 1, if the current symbol is 1, write a 0, move one space to the right, and transition to HALT
This Turing machine would move to the right, swapping the value of all the bits it passes. Since the starting tape is all 0s, it would make an unending string of ones. This machine would not be a busy beaver contender because it runs forever on a blank tape. A machine with more states might transition to a state with different behaviour, rather than halting, when it hits a 1. Mrfoogles (talk) 19:01, 4 July 2024 (UTC)[reply]
I think a complete machine makes it more concrete, and you can also summarize it, so people try to see the behaviour in the machine, rather than having to try to deduce it from the source code. Mrfoogles (talk) 19:02, 4 July 2024 (UTC)[reply]

Better source needed for table of results

[edit]

Currently, there is no source for all of the more-than-2-symbol results. It's unclear if such a source even exists: probably someone keeps an index somewhere, but it's unlikely to exist in a non-self-published source. Should we just cite blogs? Or should the rest of the table be deleted? Mrfoogles (talk) 21:19, 4 July 2024 (UTC)[reply]

Also, should the technical tag be removed? Mrfoogles (talk) 22:12, 5 July 2024 (UTC)[reply]

S(5) Solved

[edit]

The following article claims that S(5) has been solved.

Amateur Mathematicians Find Fifth ‘Busy Beaver’ Turing Machine | Quanta Magazine BAbdulBaki (talk) 12:36, 6 July 2024 (UTC)[reply]

It has been. It should be in the article: I know it is in some places. If the article claims it's not, it should be updated. Mrfoogles (talk) 08:01, 8 July 2024 (UTC)[reply]

"uncomputability" - "counterexplanation" retry with an improved description

[edit]

previous attempt: https://en.wikipedia.org/w/index.php?title=Talk%3ABusy_beaver&diff=1138560813&oldid=1112203473

There are 2 new functions: num(n) and space(n).

So, here is an improved description of why num(n), Σ(n), space(n) and S(n) could have become computable.

In How big is Rayo's Number 拉約數, Carbrickscity showed and explained the hurdles that make Rayo's number uncomputable.

Rayo(n) is defined as: "the smallest natural number greater than all natural numbers named by an expression in the language of first-order set theory with n symbols or less". This means the smallest number that requires at least n + 1 symbols in the first-order set theory.

Rayo's number is Rayo(10^100). In the brackets, you can see the hurdles.

No one will ever define a number with 10^100 symbols.

number of seconds since the big bang ≈ 10^17 seconds

Planck time ≈ 10^-44 seconds

number of Planck times since the big bang ≈ 10^61 < 10^100

number of atoms/particles in the observable universe ≈ 10^80 < 10^100

But one of the lower bounds for one of the smaller Rayo(n) values makes it look like num(n), Σ(n), space(n) and S(n) could have become computable.

Emk has shown that Rayo(7901) > S(2^65536 - 1), where S(n) is the maximum shifts function.

2^65536 - 1 states in the maximum shifts function is a lot of states, and it only took 7902 symbols in the first-order set theory. Because 7902 symbols can easily be written down, it looks like Emk could have disproven the uncomputability of num(n), Σ(n), space(n) and S(n) with his Rayo(7901) > S(2^65536 - 1). And this is not even the latest bound.

Using the latest bounds, it can be determined that Rayo(7339) > S(2^65536 - 1).

2^65536 - 1 states in the maximum shifts function is a lot of states, and it only took 7340 symbols in the first-order set theory. Because 7340 symbols can easily be written down, I guess num(n), Σ(n), space(n) and S(n) became computable.

Also, where is the exact value of TREE(3)? With a strong symbol function, like the first-order set theory, Rayo(n), it should be possible to find out the exact value of TREE(3). The previous symbol function used for TREE(3) was strictly finite mathematics, but it takes 2↑↑1000 symbols for TREE(3). The first-order set theory, Rayo(n), is a much stronger symbol function than strictly finite mathematics. Since Rayo(7339) > S(2^65536 - 1), S(n) ≥ space(n) ≥ Σ(n), Σ(2000) ≥ Loader's number, 2^65536 - 1 >> 2000 and Loader's number >> SCG(13) > SSCG(3) >> TREE^TREE(3)(3), the first-order set theory, Rayo(n), would reduce the number of symbols for TREE(3) from 2↑↑1000 to a few thousand. So, if we use the first-order set theory, Rayo(n), for TREE(3) in symbols, we can find out the actual exact value of TREE(3). 94.31.89.138 (talk) 11:53, 5 October 2024 (UTC)[reply]

The problem here is that you’ve shown that Rayo(x) is bigger than S(y), but there are two problems. First, Rayo(x) is also uncomputable; you’ve just reduced uncomputability to uncomputability. Second, to make a function computable, you can’t just upper-bound a single value (e.g. I could say that S(5) < the constant function 100 million evaluated at 5), but that does not allow me to upper bound the S(n) function in general. So this does not place an upper bound on S(n), and even if it did it would be useless because Rayo(n) is uncomputable, as well as because S(n) can be (not even with too much difficulty) proven to be impossible to compute in general anyways, so a computable upper bound has been proven to not exist. Mrfoogles (talk) 15:18, 5 October 2024 (UTC)[reply]
Also, just because Rayo(7339) > TREE^{TREE(3)}(3) doesn't mean you can calculate the exact value of Tree(3); for example just because I know someone is under the age of 200 doesn't mean I know exactly what age they are. Not that we can calculate Rayo(7339) to the best of my knowledge anyways. It is neat that Rayo(7339) > S(2^65536 - 1), though. Mrfoogles (talk) 16:32, 5 October 2024 (UTC)[reply]