Well, Dorin interprets Rasmussen's paper as claiming "impossible". That is not what Rasmussen says. My impression is that he (R) is saying something like this: a high-level description, as in C++, is easier for programmers to understand, work with, and predict than machine language, not that C++ and machine language admit different ranges of possible programs. But this analogy is not exactly on target. Nor do I think Rasmussen means that discussing the gold standard, paper money, and economics doesn't require understanding photosynthesis (to make trees, to make paper) or the atomic weight of gold. His idea seems more concrete than a distinction between levels of abstraction.

I meant to give links to my research on this topic for other people to follow in case they were interested. I notice I made a typo: "I became intrigued with this topic and *proved my notes, for those interested, below." I meant provide, not *proved. I didn't attempt to prove that the ideas presented by the various authors were consistent when I included those various URLs in an earlier post.

Regards, Stephen
http://www.**--****.com/ (1994)

"The essential idea of moving up the hierarchy is that the symmetries assumed by the agent are broken by the data when reconstruction leads to an infinite model at some level of representation ... The key step to innovating a new model class is the discovery of new equivalence relationship. This interpretation provides a more elaborate definition of emergence: A process undergoes emergence if at some point the architecture of information processing has changed in such a way that a distinct and more powerful level of intrinsic computation has appeared that was not present in earlier conditions."

SH: This definition by Crutchfield does not apparently violate Turing equivalence. So when Rasmussen advances the idea of designing a CA with more complex primitives, it no more repudiates Turing equivalence than Crutchfield's description does.

Regards, Stephen
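SH (aside): The Turing-equivalence point can be made concrete with a sketch of my own (not from Crutchfield or Rasmussen): elementary rule 110 is a CA with the simplest possible primitives, two-state cells updated from a three-cell neighbourhood, yet it is known to be computation-universal. Richer primitives change convenience, not the range of computable behaviour. The widths and step counts below are arbitrary illustrative choices.

```python
# A minimal 1-D cellular automaton (elementary rule 110, known to be
# Turing-complete) showing that two-state primitives already support
# universal computation.

def step(cells, rule=110):
    """Apply one update of an elementary CA with periodic boundaries."""
    n = len(cells)
    out = []
    for i in range(n):
        # Pack the (left, centre, right) neighbourhood into 3 bits,
        # then look up the matching bit of the rule number.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

def run(width=64, steps=30):
    cells = [0] * width
    cells[width // 2] = 1          # single seed cell
    history = [cells]
    for _ in range(steps):
        cells = step(cells)
        history.append(cells)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if c else "." for c in row))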
Stephen Harris < XXXX@XXXXX.COM > wrote or quoted: A lot of words - do they mean anything? Conventionally, emergence is about the appearance of qualitatively new sorts of behaviour - and does not require that there's anything "more powerful" about them. -- __________ |im |yler http://www.**--****.com/ @XXXXX.COM Remove lock to reply.
Stephen Harris < XXXX@XXXXX.COM > wrote or quoted:

[An Alan Dorin & Jon McCormack paper says:] [...] [...]

Well, that was where I came in, wasn't it? The top of this post has the "impossible" quote. I noticed it was bogus, tracked down the reference, and found that its thesis was:

``Ansatz. Given an appropriate simulation framework, an appropriate increase of the object complexity of the primitives is necessary and sufficient for generation of successively higher-order emergent properties through aggregation.''

If that paper doesn't *actually* mean that an increase in the complexity of the base units is necessary for the generation of successively higher-order emergent structures, it is /extremely/ easy for me to understand how those citing the paper /thought/ it was making that claim.

Anyway, the idea under discussion:

``that it may be impossible to extend the levels in a hierarchy, without adding to the complexity of the base units''

...has turned out to be a bogus one, as I think you agree. Whether the Rasmussen paper cited as the source of the notion is being deliberately obtuse, has suffered in translation, was misleading by accident, or was simply misinterpreted, I'll avoid going into further.

-- __________ |im |yler http://www.**--****.com/ @XXXXX.COM Remove lock to reply.
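[Aside, a sketch of mine rather than anything from the papers under discussion:] why the "impossible" reading fails is easy to see in Conway's Life. The base units are fixed two-state cells under fixed rules, yet a glider, a higher-order structure with its own behaviour (it translates diagonally across the grid, repeating every four steps), emerges with no added complexity in the primitives.

```python
# Conway's Life: two-state cells, fixed rules, yet a "glider" (a
# higher-order moving structure) emerges without any change to the
# primitives. The specific pattern and step count are standard Life lore.
from collections import Counter

def life_step(live):
    """One Life update; `live` is a set of (x, y) live-cell coordinates."""
    counts = Counter((x + dx, y + dy)
                     for (x, y) in live
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # A cell is alive next step with exactly 3 neighbours, or 2 if
    # already alive.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
state = set(glider)
for _ in range(4):                 # one full glider period
    state = life_step(state)
shifted = {(x + 1, y + 1) for (x, y) in glider}
print(state == shifted)            # True: the glider reproduced itself, displaced
```

The glider's period and displacement are properties of the aggregate, not of any cell; nothing about the base units had to change to obtain them.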
1.state of the art on dynamical hierarchies Ansatz
2.state of the art on dynamical hierarchies
3.Dynamical hierarchies development
4.state of the art in stat NLP
Hi all, I was interested in knowing what the state of the art is in statistical natural language processing. Could someone please give me some pointers/links to research groups that are at the forefront. Thank you, Alok
5.Unsupervised grammar inference - state of the art...
Dear friends, I am new to the NLP area. Right now I am preparing a state-of-the-art survey of techniques for obtaining a grammar from a text corpus, without labeling the text and without any prior knowledge of the language (unsupervised learning). I found some proposals like GraSp and CLL, among others, but these are very old documents. Is there anybody with experience in this area? I would appreciate it if somebody could suggest recent strategies or research areas to begin my investigation... Thanks in advance, Héctor Cadavid
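[A toy sketch of the general idea, not GraSp or CLL themselves:] one simple baseline for unsupervised structure induction is greedy pair merging: repeatedly rewrite the most frequent adjacent word pair as a fresh nonterminal, yielding a small set of rules from raw text. The corpus, rule limit, and frequency threshold below are made-up illustrations.

```python
# Toy unsupervised chunker in the spirit of distributional grammar
# induction: merge the most frequent adjacent pair into a nonterminal,
# repeat. Real systems (ADIOS, ABL, etc.) are far more sophisticated.
from collections import Counter

def induce(corpus, max_rules=3, min_count=2):
    """Greedy pair merging; returns (rules, rewritten corpus)."""
    sents = [s.split() for s in corpus]
    rules = {}
    for i in range(max_rules):
        pairs = Counter(p for s in sents for p in zip(s, s[1:]))
        if not pairs:
            break
        (a, b), n = pairs.most_common(1)[0]
        if n < min_count:          # stop when no pair recurs enough
            break
        nt = f"N{i}"               # fresh nonterminal symbol
        rules[nt] = (a, b)
        new_sents = []
        for s in sents:            # rewrite every occurrence of (a, b)
            out, j = [], 0
            while j < len(s):
                if j + 1 < len(s) and (s[j], s[j + 1]) == (a, b):
                    out.append(nt)
                    j += 2
                else:
                    out.append(s[j])
                    j += 1
            new_sents.append(out)
        sents = new_sents
    return rules, sents

corpus = ["the dog barks", "the dog sleeps", "a cat sees the dog"]
rules, parsed = induce(corpus)
print(rules)                       # {'N0': ('the', 'dog')}
```

On this corpus only "the dog" recurs often enough to be merged, so a single rule N0 -> the dog is induced; with more text the loop builds a shallow hierarchy of such chunks.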