On 3/24/2014 10:56 AM, Steven Lord wrote:
> "dpb" <firstname.lastname@example.org> wrote in message
> news:email@example.com...
>> I'm presuming the ML JIT optimization can't tell the above copy
>> doesn't need to happen, of course. Is the runtime code generation that
>> smart, Steven?
>
> I really don't want to discuss the internal guts of the JIT for several
> reasons, among them: it WILL change from release to release; it gets
> really complicated really fast; and I hope that eventually/soon we get
> to a point where MATLAB is good enough at doing what it does that you
> guys _don't care_ (aside perhaps from curiosity) what we do internally
> to execute the code. [Yeah, I know that last one'll never happen. I can
> hope, can't I?]
>
> My suggestion: don't write your code in some awkward way tailored to the
> current implementation of the underlying "guts" of MATLAB. Program your
> code in a way that makes sense, and if it's not working as quickly or as
> efficiently as you think let us know so we can improve MATLAB (or so we
> can offer suggestions as to how to modify your code so it still makes
> sense but also works more quickly or efficiently.)
I don't see either of these alternatives as at all "awkward," nor is it an unreasonable expectation for users to want to know whether some obvious optimizations are or are not within the capabilities of the Matlab code generator.
You chose to eliminate an 'if' clause on what would seem to be basically a performance issue, whereas I figured the extra copy would likely be far more expensive than the test/branch would be for sizable arrays. Why should I have to guess whether I was right or wrong?
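For concreteness, the trade-off in question is something like the following hypothetical sketch -- the variable names and the `needsChange` condition are mine for illustration, not the actual code from earlier in this thread:

```matlab
% Hypothetical illustration only -- not the actual code under discussion.
x = rand(5000);        % a sizable array

% Alternative A: unconditional copy/modify, trusting the optimizer
% (JIT / copy-on-write analysis) to elide the copy when it can.
y = x;                 % copy-on-write: no data copied yet
y(1) = 0;              % first write forces the full deep copy

% Alternative B: guard with a cheap scalar test so the expensive
% copy only happens in the branch that actually needs it.
needsChange = false;   % assumed flag, known before the copy
if needsChange
    y = x;
    y(1) = 0;          % copy paid only when required
else
    y = x;             % no subsequent write, so the data stays shared
end
```

Whether Alternative A ever costs more than Alternative B is exactly the kind of question that depends on what the optimizer can recognize -- which is the information being asked for here.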
It's _always_ going to be true that problems will continue to grow in size/complexity such that compute time will always be a concern, and an understanding of which constructs are or are not within the capabilities of an optimizer is a serious piece of knowledge for those writing such code. Hence, your idea that the need for such information will go away is indeed a pipe dream, for real reasons.

Plus, there's the problem that every release requires more overhead for the same performance -- I can see it in spades on my older machine. When I got R2012b it brought that machine essentially to its knees, whereas R12 performed well enough that I was completely satisfied. I could never have used the newer release for real work on the existing system -- I suffered through it to stay current for evaluation as I had promised I would, to play w/ some new features, and to be reasonably current for cs-sm and the forum, but real work just couldn't have happened.
So, while committing to a particular implementation is one thing, commenting on the feasibility of a fairly sophisticated optimization -- whether a given copy can or cannot be recognized as unnecessary by the optimizer -- would seem a reasonable piece of information to share with the community. If it is feasible, it seems _HIGHLY_ unlikely to me that it would ever be removed for those cases where it is doable. I agree that _HOW_ it is done isn't of any real concern and could be of IP value to TMW.
Then again, I'm not TMW; just a (mostly former) user and present part-time evaluator/suggestor/contributor.