Similar Threads:
1. Debug Off: Explanation
2. Pictured output for newbie :-( needing explanation ;-)
Dear Forthers,
I'm struggling with pictured output:
15 0 <# #s #> type
should, as I understand it, print '15'. It does in SwiftForth, but not in
Win32Forth (6.11.07) or in DX-Forth. Even though I'm practically a
beginner, I do know that 'if you know one Forth, you know one Forth' ;-)
Is this an instance of that rule? Or what is my problem here?
This is about the simplest thing I could try, and I can't even get it to
run. What I want to achieve is to convert seconds on the stack to
hh:mm:ss, as suggested on the web page of 'Starting Forth'. That didn't
work either, so I went back to something more fundamental ...
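For reference, the conversion I'm after is just repeated division by 60.
Here it is sketched in Haskell, purely to pin down the intended result
(toHMS is an illustrative name; this sidesteps the pictured-output
question itself):

import Text.Printf (printf)

-- The target conversion: total seconds to "hh:mm:ss".
-- divMod plays the role of Forth's /MOD here.
toHMS :: Int -> String
toHMS total = printf "%02d:%02d:%02d" h m s
  where
    (rest, s) = total `divMod` 60
    (h, m)    = rest `divMod` 60

main :: IO ()
main = putStrLn (toHMS 3725)  -- prints "01:02:05"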
Cheers,
Christoph.
3. Nice historical explanation of when and why the term "closure" came into use
Benjamin Franksen wrote:
> Joachim Durchholz wrote:
>> A really silly case is "monadic I/O". That's just like characterizing
>> multiplication as "associative number combination" - it's true, and
>> captures one property of the whole mechanism, but certainly not the most
>> relevant aspect.
>
> I have to disagree. IMO it /is/ the most relevant aspect, as soon as you
> take the representation of 'primitive' I/O actions as an abstract data type
> for granted.
>
> Why? First note that the primitive action values would be (almost) useless
> without a way to sequence them together to form programs.
>
> The astonishing fact is that the monad combinators and the corresponding
> laws are /exactly/ what is required to precisely capture the essence of how
> sequencing works in any imperative programming language.
Sure. But I/O is more than just sequencing - and that isn't surprising:
if it were otherwise, then there would be no interesting monads beyond IO.
A better approximation would be to say "a monad is about stringing
together type-heterogeneous things, and that stringing-together is
associative". I say "approximation" because I'm sure enough that
everything matching that definition is a monad, but I'm not sure that
the definition covers everything that is a monad.
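To make the associativity point concrete, here is a minimal Haskell
sketch (greet is just an illustrative name):

-- Sequencing two primitive actions with bind (>>=); this is all
-- that do-notation desugars to.
greet :: IO ()
greet = getLine >>= \name -> putStrLn ("Hello, " ++ name)

-- The monad laws, for any action m, f :: a -> m b, g :: b -> m c:
--   return a >>= f   =  f a                        (left identity)
--   m >>= return     =  m                          (right identity)
--   (m >>= f) >>= g  =  m >>= (\x -> f x >>= g)    (associativity)
-- The third law is the associative stringing-together; the
-- type-heterogeneity is that a, b and c may all differ.

main :: IO ()
main = greet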
> Monadic I/O simply recognizes this fact and provides the basic
> sequencing operation not as a language construct, as in imperative
> languages, but as /functions/, namely the monadic 'bind' (>>=) and
> 'return' functions. This is of course possible only after you
> recognize actions as (first class) values.
>
> As to your "multiplication as associative number combination": sure,
> associativity does not capture the essence of what multiplication of
> numbers is. For instance, there is commutativity, neutral
> elements, distribution laws (referring to yet another function named
> "addition") and don't forget the interaction with ordering.
I couldn't have said that better.
> However, as soon as you take all these laws together you /can/
> precisely capture the essence of what "multiplication of numbers"
> really is.
Sure. Still, you wouldn't characterize "multiplication of numbers" as
"associative number combination", would you?
My hypothesis is that calling Haskell's ways of doing I/O "monadic I/O"
has the same kind of terminological mismatch, i.e. it names the concept
after just one of several important properties.
> Compare this to I/O and "monadicity": there is nothing missing here.
> Can you name /any/ additional law (maybe involving additional
> primitive combinators) that is needed to capture how imperative
> programs are constructed from 'I/O actions'
That depends on what aspects of I/O you want to capture in the terminology.
> 'I/O actions' (aka 'statements')?
Those are still very different things. Statements that don't involve I/O
can usually be mapped straightforwardly to purely functional code.
Sometimes it may be helpful to use the State monad, but even that is
purely functional.
You have to step outside that and use the IO monad only if you deal with
stateful entities that exist outside the Haskell program. Oh, and
occasionally for debugging, but I'd like to keep *that* can of worms
closed for the moment :-)
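To illustrate the State monad point (a minimal sketch, assuming the mtl
library's Control.Monad.State; tick is an illustrative name):

import Control.Monad.State

-- A "statement" that only touches local state maps to pure code:
-- State Int is just a function Int -> (a, Int) under the hood.
tick :: State Int Int
tick = do
  n <- get       -- read the current counter
  put (n + 1)    -- the "assignment", still purely functional
  return n

-- Running a sequence of such statements is a pure function of the
-- initial state. Here: (value of the last tick, final state) == (2, 3).
threeTicks :: (Int, Int)
threeTicks = runState (tick >> tick >> tick) 0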
Regards,
Jo
4. datum->syntax-object explanation
Hello,
I'm trying to understand syntax-case transformers, and I have a problem
with datum->syntax-object. Consider the following example:
(define-syntax lambda-foo
  (lambda (x)
    (syntax-case x ()
      ((_ body ...)
       (with-syntax ((arg (datum->syntax-object (syntax _) 'foo)))
         (syntax (lambda (arg) body ...)))))))
Here is an example of its application:
#;> ((lambda-foo (display foo) (newline)) 'test)
test
#;>
While I intuitively understand the big picture, I still miss the point
of the syntax... could you please explain, piece by piece, what happens
after the (with-syntax ... part?
In particular, what does datum->syntax-object return as 'arg'? What is
the relationship between the two occurrences of 'arg'?
Thank you so much!!! :-)
Bye!
--
Kirk
5. Y combinator (somebody, correct this explanation)
seanf wrote:
> Hi group
>
> I'm a Scheme newbie working through The Little Schemer. After much
> substitution by hand, I can see how the derivation works, and how Y
> produces recursive functions in specific simple cases like length and
> factorial.
>
> I still feel like I'm missing the 'aha!' experience. I can't bridge the
> gap between making recursive functions and being a fixed-point operator.
> Looking through old posts in this group, I found 'Programming Languages
> and Lambda Calculi' (Felleisen & Flatt), which has helped some. Now I
> can see, for example, that a multiplier is a fixed point of a multiplier
> maker, and (the TLS example) that length is a fixed point of the length
> maker, mk-length:
>
> (lambda (length)
>   (lambda (l)
>     (cond
>       ((null? l) 0)
>       (else
>        (add1 (length (cdr l)))))))
>
> But why is a fixed point of mk-length _necessarily_ length? Are fixed
> points unique?
>
> I'm also perplexed by a final throwaway comment in ch9 of TLS:
>
> "What is (Y Y) - Who knows but it works very hard."
>
> Now I can't stop thinking about (Y Y). I guess it's a fixed point of Y,
> so (Y (Y Y)) is (Y Y), as is (Y (Y (Y Y))) etc. Is there an
> important truth at the end of this, or should I just try to forget about
> it?
>
> I only started on Scheme because I wanted to understand Lisp. Now it
> looks like I need to learn lambda calculus to understand Scheme. But
> before that, I'll need a good grasp of algebra, ...
>
>
> Sean
>
Hi Sean,
I found a reasonable explanation in "The Why of Y" by R. Gabriel.
"Normally" you define a recursive function by giving it a name,

(define FACT
  (lambda (n)
    (if (< n 2)
        1
        (* n (FACT (- n 1))))))

and use that name (FACT) in the function's own body. The name FACT is a
global variable, which is "bad" (a side effect).
If you want to create a recursive function without creating a new global
name, you need Y:
a) rewrite your function as a lambda expression that takes the recursive
   call as a parameter
b) give that expression as a parameter to the Y function
and you have recursion with local bindings only.
Unless I have understood it completely wrong...
What does the "fixed-point" property have to do with this? My guess is
that it is needed so that we can be theoretically sure that ANY function
can be given as a parameter to Y.
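If it helps, here is the same recipe sketched in Haskell, where lazy
evaluation lets one write the fixed-point operator directly (the strict
lambda calculus needs the Y/Z construction instead; mkLength mirrors
TLS's mk-length):

-- fix f is a value x with x = f x, i.e. a fixed point of f.
fix :: (a -> a) -> a
fix f = f (fix f)

-- The "maker": takes the recursive call as an ordinary parameter (step a).
mkLength :: ([a] -> Int) -> ([a] -> Int)
mkLength recurse l = if null l then 0 else 1 + recurse (tail l)

-- Step b: pass the maker to the fixed-point operator. There is no
-- global recursion anywhere; myLength = mkLength myLength holds by
-- construction. myLength "hello" == 5.
myLength :: [a] -> Int
myLength = fix mkLength

As to Sean's uniqueness question: fixed points need not be unique in
general; Y (and fix) constructs the least one, and for a well-founded
recursion like mk-length that least fixed point agrees with length on
every finite list.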
-pekka-
6. explanation of a syntax-case example
7. Simulation behaviour, explanation requested
8. using 64-bit integers on 32-bit machine [OT: further explanation]