Hi Andreas,
>> I have finally found some time to come back to hedge and start looking
>> at implementing variable epsilon and mu in the Maxwell operator. So
>> far I have succeeded in creating a very rough modification to em.py to
>> allow the basic functionality (attached, along with a modified example
>> to run it). However, I'm having a couple of issues:
>>
>> - I'm unable to get the absorbing boundary conditions to work
>> correctly. As far as I can see they should not need to be
>> reformulated when varying epsilon and mu, since the internal value is
>> always used. The current code looks correct to me, but it causes a
>> TypeError exception and I'm not sure how to remedy this.
>
> This results from silliness in hedge's language that I fixed a while
> ago. If possible, you should use 'operator(argument)' instead of
> 'operator * argument' in newly-written operator code. Otherwise, hedge
> has to make a guess about how to bind things like 'operator * arg1 /
> arg2', and in your case, it made a wrong guess. I introduced the 'op *
> arg' syntax to allow cutesy expressions like 'dot(nabla, arg)', but
> aside from these cases, I've come to the conclusion that the
> parenthesized form is much less ambiguous. I've attached a diff for your
> em.py that fixes this, along with a removal of tuple.count(), which is a
> Py2.6ism. Hedge is still meant to run fine on Python 2.4.
>
Thank you for this; absorbing boundaries now work fine.
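(For anyone reading along: here is a toy illustration of the binding
problem in plain Python, not hedge's operator language -- Python parses
'op * a / b' strictly left to right, so the operator only ever binds to
'a'. The Squarer class is contrived so the two readings give different
results.)

    class Squarer:
        def __mul__(self, arg):
            return arg**2       # 'op * arg' style application

        def __call__(self, arg):
            return arg**2       # explicit 'op(arg)' application

    op = Squarer()
    print(op * 6.0 / 3.0)   # parsed as (op * 6.0) / 3.0 -> 12.0
    print(op(6.0 / 3.0))    # unambiguously op applied to 2.0 -> 4.0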
>> - I haven't been able to test a model where epsilon and mu have
>> different values throughout the structure. The
>> VariableVelocityStrongWaveOperator just sets velocity based on the
>> spatial coordinates. However using this approach it's hard to
>> guarantee that the mesh will coincide with the epsilon and mu
>> variation, and it would be much more convenient to use tags. I have
>> created example structures in gmsh where I have tagged different
>> volumes, however it seems that the material coefficient functions only
>> have the "x" and "el" arguments, and neither of these carries tag
>> information. Is there a way to do this that I am missing?
>
> See Mesh.tag_to_elements["my_tag"]. The gmsh reader should be filling
> this with the tags read from the gmsh file.
>
I found this function and managed to get it working by creating
element lists corresponding to each tagged region. However, I found
that testing each element for membership of this list ("if el in
region1_list" etc.) was extremely slow. I tried an alternative
approach: modifying each element to attach epsilon and mu information
to it, then having my TimeDependentGivenFunction use this value.
This worked fine until I tried an MPI example, where it seemed that
this information did not survive the distribution of the mesh between
MPI nodes. Of course I am doing my initial testing without MPI, but I
don't want to inadvertently break MPI/GPU compatibility.
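In case it is useful, this is roughly the direction I am now taking (a
sketch only: the tag names and material values are made up, "mesh" is
assumed to be a gmsh-read hedge Mesh, and I am assuming each element
carries an "id" attribute):

    # Material table: gmsh volume tag -> (epsilon, mu).
    epsilon0 = 8.8541878e-12
    mu0 = 1.2566371e-6
    materials = {
        "vacuum": (epsilon0, mu0),
        "dielectric": (2.25*epsilon0, mu0),
        }

    # Build an O(1) lookup keyed on element id, instead of testing
    # 'el in region1_list', which is a linear scan on every query.
    eps_mu_by_el = {}
    for tag, eps_mu in materials.items():
        for el in mesh.tag_to_elements[tag]:
            eps_mu_by_el[el.id] = eps_mu

    def epsilon(x, el):
        return eps_mu_by_el[el.id][0]

    def mu(x, el):
        return eps_mu_by_el[el.id][1]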
>> - Is there any mechanism for asymmetric treatment or tagging of
>> boundaries? In particular, once I have variable material constants
>> working, the next thing I would like to implement is total
>> field/scattered field incident boundary conditions, which would
>> require different boundary flux operators in the total field and
>> scattered field regions.
>
> I'm not quite sure what you're getting at here--are you talking about an
> interior boundary between a total-field and a scattered-field region?
Yes, this is exactly what I mean. See Jens Niegemann, Michael König,
Kai Stannigel, Kurt Busch, "Higher-order time-domain methods for the
analysis of nano-photonic systems", Photonics and Nanostructures –
Fundamentals and Applications 7 (2009) 2–11, for an example of this
being implemented (though they refer to Taflove's FDTD book for all
the gory details of the total-field/scattered-field implementation).
The features in their DG code correspond to what I would be looking
to implement in hedge, if possible.
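To make the idea concrete, my understanding of the interface treatment
(following the Taflove formulation -- this is a conceptual sketch, not
anything resembling hedge code) is that the interface flux evaluates
the neighbor's state shifted by the analytically known incident field:

    # w_neighbor: the neighboring element's state as stored;
    # w_incident: the known incident field evaluated at the interface.
    def tf_sf_exterior_state(w_neighbor, w_incident,
            neighbor_in_total_field):
        if neighbor_in_total_field:
            # Seen from the scattered-field region, subtract the
            # incident field from the total-field neighbor.
            return w_neighbor - w_incident
        else:
            # Seen from the total-field region, add the incident
            # field to the scattered-field neighbor.
            return w_neighbor + w_incident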
> I do respect that you don't want to share your code right now. But a)
> there's no reason to feel bad about any of it--in particular, it's a
> work in progress and not a mess in any way, and b) if we had this
> discussion out in the open, others could potentially learn from it. Plus
> you're missing out on potentially useful feedback from others.
>
> Therefore I'd like to encourage you to involve the list in stuff like
> this--there's nothing to lose, really. They're nice people. :)
I have noticed the recent exchange of emails with Rafael on the list,
so as soon as I have something that actually works I will post it to
the list. I do have a couple more issues:
Firstly, in the "max_eigenvalue" function of the MaxwellOperator,
"discr" seems to be an optional argument. However, to turn epsilon
and mu from TimeDependentGivenFunctions into arrays that I can
multiply and take the maximum of, wouldn't I need to interpolate
them over "discr"? Can I rely on it always being passed in?
Actually, I was wondering whether getting the user to pass a
TimeDependentGivenFunction is really the best way to go about this.
From the user's point of view, it would seem easiest to pass in a
dictionary of tag names and corresponding (epsilon, mu) values.
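Purely as a hypothetical interface (MaxwellOperator does not accept
this today; "materials" and the tag names are made up), I am picturing
something like:

    op = MaxwellOperator(
            materials={
                "vacuum": (epsilon0, mu0),
                "dielectric": (2.25*epsilon0, mu0),
                })

The operator could then resolve the tags to per-element values via
mesh.tag_to_elements internally, hiding the bookkeeping from the user.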
Secondly, and this isn't really a show-stopper, just a nuisance: the
MPI version freezes randomly on the maxwell-2d example when I use 4
or more CPUs (but, strangely, not 2 or 3). It just seems to stop
part way through the time stepping, and it doesn't seem repeatable.
I'm using Ubuntu 9.10 x64, Boost 1.40.0, and OpenMPI 1.3.2. I would
assume this is some kind of deadlock, though of course the problem
could be in the MPI libraries rather than in hedge. I just thought
I'd ask whether you are able to reproduce this; if not, I will try
switching to a different MPI implementation.
Regards,
David
On Wed, 26 May 2010 17:04:12 -0400, "Field, Scott" <scott_field(a)brown.edu> wrote:
> Sorry I've been so bothersome today. I was trying to update HEDGE, and when
> I try to build pylo I get
>
> /users/kloeckner/mach/x86_64/pool/lib/gcc/x86_64-unknown-linux-gnu/4.3.3/include-fixed/features.h:159:1:
> warning: this is the location of the previous definition
> src/wrapper/wrap_silo.cpp: In function
> 'int<unnamed>::silo_typenum_to_numpy_typenum(int)':
> src/wrapper/wrap_silo.cpp:479: error: 'DB_LONG_LONG' was not declared in
> this scope
> error: command 'gcc' failed with exit status 1
> ------------------------------------------------------------------------
> COMMAND develop ON /users/sfield/src/pylo FAILED!
> ------------------------------------------------------------------------
Oops. I fixed this in mid-April in pylo rev fb7582aab1f7..., which I
apparently forgot to push to the web. It's up there now.
Andreas