
Re: Warsaw Univ. course, was Re: Work on Basic Mathematica Stephen!
Posted:
Jun 1, 2013 6:06 AM


On 5/31/2013 12:19 AM, David Bailey wrote:
> On 28/05/2013 08:49, Richard Fateman wrote:
>>
>> I do not see any connection here. It is possible to write a linked
>> list in Mathematica and still use its evaluation, e.g. consider using
>> node[element, RestOfList]. Instead of {A, B, C} use
>> node[A, node[B, node[C, Nil]]]. A node here is like a Lisp "cons"
>> cell. In fact there are other computer algebra systems written in
>> Lisp that use Lisp lists and have similar evaluation strategies to
>> Mathematica.
>
> Yes you could, and you might find it instructive to do this, because
> you would then see exactly why Stephen Wolfram chose to implement
> lists in a different way!
I have no objection to a data type that might reasonably be called
Vector, or perhaps Sequence, that implements a data structure that uses
sequential storage. I object to calling it List.

> Try using your implementation to implement some 1000 x 1000 matrix
> algebra!
I would not particularly advise using linked lists for a dense matrix, but linked lists for small matrices might be just fine, or sparse matrices of large dimension but few entries. In the context of SYMBOLIC matrices, one generally ends up using smallish sizes because of explosive growth in costs for doing nontrivial operations with them. In a numeric context, 1000x1000 or much much larger is more likely.
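To make the sparse case concrete, here is a minimal sketch in Python (my own illustration, not code from any CAS): a sparse matrix held as a linked list of (row, col, value) cells, so storage is proportional to the number of nonzeros rather than to rows times columns.

```python
class Entry:
    """One nonzero entry linked to the next, in the style of a cons cell."""
    def __init__(self, row, col, value, rest=None):
        self.row, self.col, self.value, self.rest = row, col, value, rest

def insert(entries, row, col, value):
    """O(1): prepend a new nonzero entry; nothing is copied."""
    return Entry(row, col, value, entries)

def lookup(entries, row, col):
    """Walk the list; entries not present are implicitly zero."""
    while entries is not None:
        if entries.row == row and entries.col == col:
            return entries.value
        entries = entries.rest
    return 0

# A 1000000 x 1000000 matrix with only two nonzero entries:
m = insert(insert(None, 0, 0, 3.5), 999999, 999999, -1.0)
```

Lookup is linear in the number of nonzeros, which is exactly why this layout only makes sense when entries are few.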
> OK, perhaps you wouldn't want to use your new list implementation for
> linear algebra, but in that case, where would you want to use it?
You think that linked lists are of no use? If you have access to a
computer science text on data structures, I suggest you look. Or search
Google for applications of linked lists.
> Lisp lists are expensive to use unless the algorithm is tailored to
> generate lists in reverse order, and consume them in forward order (OK
> you can reverse a Lisp list, but that consumes space for a copy, but
> random access is still inefficient).
This is a silly argument. If you know in advance that the size of a vector is n then you can allocate space for it in Mathematica or in Lisp as a vector. If you don't know the size, adding one more element beyond the allocated space n requires O(n) time for copying. For a linked list it is O(1) to add an element. This has consequences for many applications including building queues, algebraic expression trees, storage management etc. Perhaps you are familiar with the related notion of a pointer, in the C language.
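The cost difference is easy to see in a minimal cons-cell sketch (Python rather than Lisp, my illustration): prepending to a linked list allocates exactly one new cell, while adding one element beyond a full fixed-size vector forces a copy of all n existing elements.

```python
class Cons:
    """A Lisp-style cons cell: one element plus a link to the rest."""
    __slots__ = ("car", "cdr")
    def __init__(self, car, cdr=None):
        self.car, self.cdr = car, cdr

def cons(x, lst):
    """O(1): one new cell is allocated; the old list is shared, untouched."""
    return Cons(x, lst)

def grow(vec, x):
    """O(n): a full fixed-size vector must be copied to hold one more element."""
    new = vec[:]          # copies all n existing elements
    new.append(x)
    return new

def as_pylist(cell):
    """Convert a chain of cells to a Python list, for inspection."""
    out = []
    while cell is not None:
        out.append(cell.car)
        cell = cell.cdr
    return out
```

Note that `cons` never touches the old list, so the old list can be shared between several longer lists; the vector copy cannot be shared that way.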
It is quite possible to write a Lisp program that appends to the end of
a list in time O(1). If you have access to information on Lisp, look up
nconc. It is, however, a "destructive" operation, and not my favorite
way of building a list. If I want to reverse a list that I know is not
shared (i.e. I just constructed it!) I can use nreverse, which is quite
fast, and reverses a list IN PLACE. Accessing elements in a linked list
of just a few elements (say, 8 or 10?) may be faster than accessing
elements in a vector, where array indexes may have to be translated into
byte addresses and checked against array bounds before retrieving. Note
that once a linked list is stored in memory cache, accesses can be quite
fast.
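The nreverse idea can be sketched in Python too (my illustration, modeling a Lisp cell with a small class): build cheaply by prepending, then reverse in place by flipping each cell's link, with no copying at all.

```python
class Cell:
    """A minimal list cell: a value and a link to the rest of the list."""
    def __init__(self, value, rest=None):
        self.value, self.rest = value, rest

def build(values):
    """Build by prepending in reverse order; each step is O(1)."""
    lst = None
    for v in reversed(values):
        lst = Cell(v, lst)
    return lst

def nreverse(lst):
    """Destructively reverse in place by flipping each cell's link,
    using O(1) extra space, as Lisp's NREVERSE does."""
    prev = None
    while lst is not None:
        lst.rest, prev, lst = prev, lst, lst.rest
    return prev

def items(lst):
    """Read the chain out into a Python list, for inspection."""
    out = []
    while lst is not None:
        out.append(lst.value)
        lst = lst.rest
    return out
```

As in Lisp, this is only safe on a list that is not shared: any other pointer into the old chain now sees the links running the other way.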
If a Lisp programmer wants to use an array, that is available in the
language. If a Lisp programmer wants to use a linked list, that is
available too. If a Mathematica programmer wants to use a linked list,
there is the (admittedly clumsy) construction I suggested previously,
but basically it is not an advertised option.
If you want to look for ways in which Lisp can be expensive, I suggest
you learn more about it, for example by understanding boxed and unboxed
numbers.
The idea that someone might write a slow program in a language is not an indictment of the language if a slightly more experienced or skilled or knowledgeable person could write an equivalent computation that runs very fast. It would be nice if even naive programs written by naive programmers were magically "optimized" into better programs. This is a challenge in the design and implementation of languages. E.g. "automatic parallelization" ...
My criticisms of Mathematica's language (EF) are based on situations in which the language itself interferes with the writing of efficient or mathematically consistent or even numerically accurate programs. And it misuses names...
> Since the Lisp language was designed very early on, I would imagine
> its list structure reflects the tiny memory spaces available at that
> time. Garbage collection of varying sized objects is more complex and
> tends to need some slack memory.
Lisps on modern computers tend to have pointers that are close to the
normal size of words in the architecture, that is, 32 or 64 bits.
Usually a few bits in a pointer are treated as tags. But on the other
hand there is usually no need to address individual bytes; if you
address only words, you can mask off some of those bits anyway.

> The decision to implement lists the Mathematica way, has been amply
> vindicated in recent years.
Um, I don't care that vectors work OK in Mathematica. They work OK in
Lisp too. In fact they provide a very efficient mechanism if you declare
them to be vectors of elements with a particular type. Then they can be
unboxed (which you could read about). If you have a vector of elements
of arbitrary stuff, you generally store pointers, just as you have in
Mathematica. Lisp's declared vectors or arrays appear to correspond to a
generalization of PackedArray in Mathematica. This notion in Lisp goes
back quite a way, to at least 1984.
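A rough analogy to boxed versus unboxed storage can be shown even in Python (my illustration; the exact object sizes are CPython-specific): a list holds pointers to boxed float objects, while an `array` of typecode `'d'` stores the raw 8-byte doubles contiguously, much like a declared Lisp vector or a Mathematica PackedArray.

```python
import sys
from array import array

n = 1000
boxed = [float(i) for i in range(n)]   # list of pointers to boxed float objects
unboxed = array('d', range(n))         # raw 8-byte doubles, stored contiguously

# each element of the unboxed array is exactly one machine double
assert unboxed.itemsize == 8
# a single boxed float object is larger than the raw double it wraps
assert sys.getsizeof(boxed[0]) > unboxed.itemsize
```

Beyond the memory saved, the unboxed layout is what lets numeric kernels run over the data without chasing a pointer per element.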
I don't know of any reason to say that the use of the name List has been
vindicated. It seems to be potentially a source of confusion to anyone
who knows (say) programming in C, Lisp, ...
> Because the mechanisms supplied to access lists are not skewed towards
> sequential access, it has been possible to produce packed arrays (and
> indeed sparse arrays) that work as drop-in replacements for the
> equivalent list structures, with enormous performance gains.
Um, if you want to argue that there are advantages to packed arrays or
sparse arrays, you will have to find someone else to argue with. I
agree.

> Mathematica also gains from the decision to make lists mutable; again
> this simplifies the design of algorithms and increases performance.
I have no idea what you are arguing about. Lists in Lisp are mutable,
e.g.

(setf x '(a b c))
(setf (second x) 'q)

Now x is the list (a q c).
"Lists" in Mathematica may not be. e.g. f[n_]:=(Print[n];Hold[f[n]]); q=Table[f[i],{i,1,3}] q[2]= 2 FAILS. ReleaseHold[q[2]] reevaluates not just q[2], but every element in q.
It prints 1,2,3
> Clearly Mathematica lists also score in that they must help internally
> to avoid repeated evaluation.
Just as arrays help? Oh, I forgot. They are the same as arrays.
> Clearly Mathematica must store some information in the head of every
> object to keep track of objects that need reevaluation, and you
> wouldn't want each list node to be bloated out in that way.
It actually works the other way. If some element of a "List" is altered,
then every element in that array must be reevaluated. So it is a big
disadvantage. See the example above, with q[[2]].
If you wish to implement a "reevaluate if it might matter" strategy, you
can do so by noting the dependency of variables on symbols. A linked
list cell is not a variable or a symbol. At least two other computer
algebra systems use such a strategy (one optionally).

> Yes you could implement Lisp-style lists in Mathematica, and with a
> bit more effort,

I showed how to do it.

> you could also supply them with a nice syntax for input/output,

Surely.

> so try it and see how many people find them useful!
The point I was making regarding "Lists" is that Wolfram is using a word for something quite different from the list data structure. This is one among a number of reasons to not use Mathematica for computer science.
Linked lists are used all the time in Mathematica. g[h[x]] is a linked list.
By the way, f[x] or Sin[x] looks sort of like a "function" but with
square brackets. But it is not really. It has something to do with
rules, patterns, evaluation, etc. As I recall, in SMP, Wolfram's prior
CAS before Mathematica, such things were called Projections. This was
probably too confusing; in any case the term was dropped. Maybe the
people who knew other meanings for projection were able to get to him
(e.g. graphics people?) in a way that computer scientists could not.
http://www.stephenwolfram.com/publications/articles/computing/81smp/1/text.html
RJF

