Thanks for pointing me to the example on using hedge.partition; it has
been very helpful to my understanding. While playing around I noticed a
few issues in hedge. I don't think any of them are causing me major
problems, but I guess they are still worth reporting:
* Firstly, when upgrading to the latest version of pytools I get the
following traceback:
      line 30, in <module>
        from pytools.obj_array import *
      line 10, in <module>
        decorator_module = __import__("decorator", level=0)
    ImportError: No module named decorator
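If the failing import refers to the standalone "decorator" package on
PyPI (an assumption on my part, based on the __import__("decorator",
level=0) line), then installing that package should fix it. A quick way
to check whether it is importable:

```python
import importlib.util

# Check whether a standalone "decorator" package can be found without
# importing it. (Assumption: pytools' failing
# __import__("decorator", level=0) refers to that PyPI package.)
def has_decorator():
    return importlib.util.find_spec("decorator") is not None

print(has_decorator())
```

If this prints False, "pip install decorator" would be my first guess at
a fix.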
* Secondly, there is an error in hedge.mesh.generator.make_1d_mesh, line 52:
it passes a keyword argument "element_tagger" to make_conformal_mesh_ext,
which does not accept a keyword argument by that name. While I was
looking this up, I noticed that the documented parameters of
make_conformal_mesh_ext do not match the function's actual parameters
(e.g. volume_tags vs volume_tagger).
* Finally, importing from hedge.partition also produces the following
warning; I'm not sure if this matters at all:
    UserWarning: make_conformal_mesh is deprecated. Use
>> I have finally found some time to come back to hedge and start looking
>> at implementing variable epsilon and mu in the Maxwell operator. So
>> far I have succeeded in creating a very rough modification to em.py to
>> allow the basic functionality (attached along with a modified example
>> to run it). However, I'm having a couple of issues:
>> - I'm unable to get the absorbing boundary conditions to work
>> correctly. As far as I can see they should not need to be
>> reformulated when varying epsilon and mu, since the internal value is
>> always used. The current code looks correct to me, but it causes a
>> TypeError exception and I'm not sure how to remedy this.
> This results from silliness in hedge's language that I've fixed a while
> ago. If possible, you should use 'operator(argument)' instead of
> 'operator * argument' in newly-written operator code. Otherwise, hedge
> has to make a guess about how to bind things like 'operator * arg1 /
> arg2', and in your case, it made a wrong guess. I introduced the 'op *
> arg' syntax to allow cutesy expressions like 'dot(nabla, arg)', but
> aside from these cases, I've come to the conclusion that the
> parenthesized form is much less ambiguous. I've attached a diff for your
> em.py that fixes this, along with a removal of tuple.count(), which is a
> Py2.6ism. Hedge is still meant to run fine on Python 2.4.
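To illustrate the binding ambiguity with a toy example (these classes are
stand-ins, not hedge's actual operator machinery): Python parses
"op * arg1 / arg2" as "(op * arg1) / arg2", so the infix operator binds
arg1 alone and the division happens afterwards, while the parenthesized
form makes the argument explicit.

```python
# Toy sketch of the two binding styles; not hedge's real classes.
class Applied:
    def __init__(self, value):
        self.value = value

    def __truediv__(self, other):
        # division happens *after* the operator was applied
        return Applied(self.value / other)

class Operator:
    def apply(self, x):
        return Applied(x + 1)  # some non-linear toy operation

    def __call__(self, x):   # parenthesized form: op(arg)
        return self.apply(x)

    def __mul__(self, x):    # infix form: op * arg
        return self.apply(x)

op = Operator()
print(op(4 / 2).value)     # 3.0: the whole quotient is the argument
print((op * 4 / 2).value)  # 2.5: op binds 4 first, then /2 applies
```

For a linear operation the two forms would agree, which is exactly why
the wrong guess can go unnoticed until it raises an error.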
Thank you for this; absorbing boundaries now work fine.
>> - I haven't been able to test a model where epsilon and mu have
>> different values throughout the structure. The
>> VariableVelocityStrongWaveOperator just sets velocity based on the
>> spatial coordinates. However using this approach it's hard to
>> guarantee that the mesh will coincide with the epsilon and mu
>> variation, and it would be much more convenient to use tags. I have
>> created examples structures in gmsh where I have tagged different
>> volumes, however it seems that the material coefficient functions only
>> have the "x" and "el" arguments, and neither of these carries tag
>> information. Is there a way to do this that I am missing?
> See Mesh.tag_to_elements["my_tag"]. The gmsh reader should be filling
> this with the tags read from the gmsh file.
I found this function and managed to get it working by creating element
lists corresponding to each tagged region. However, I found that testing
each element for membership of such a list ("if el in region1_list",
etc.) was extremely slow. I tried an alternative approach: modifying
each element to attach epsilon and mu information to it, then having my
TimeDependentGivenFunction use that value. This worked fine until I
tried an MPI example, where it seemed that this information did not
survive the distribution of the mesh between MPI nodes. Of course I am
doing my initial testing without MPI, but I don't want to inadvertently
break MPI/GPU compatibility.
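For what it's worth, the slow list-membership test can be replaced by a
dictionary precomputed once from tag_to_elements, giving one O(1) lookup
per element instead of an O(n) scan per query. A rough sketch, with
stand-in Element objects and made-up tag names:

```python
# Toy stand-in for mesh elements; hedge's real Element class differs.
class Element:
    def __init__(self, id):
        self.id = id

# tag -> list of elements, mirroring Mesh.tag_to_elements from the thread
tag_to_elements = {
    "vacuum": [Element(0), Element(1)],
    "dielectric": [Element(2), Element(3)],
}
materials = {"vacuum": (1.0, 1.0), "dielectric": (4.0, 1.0)}  # (epsilon, mu)

# Precompute element id -> (epsilon, mu) once, instead of repeatedly
# scanning region lists with "if el in region1_list".
el_to_material = {
    el.id: materials[tag]
    for tag, els in tag_to_elements.items()
    for el in els
}

print(el_to_material[2])  # (4.0, 1.0)
```

Unlike attaching attributes to the element objects, this table can be
rebuilt locally from the mesh after MPI distribution, so nothing has to
survive the pickling of the elements themselves.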
>> - Is there any mechanism for asymmetric treatment or tagging of
>> boundaries? In particular, once I have variable material constants
>> working, the next thing I would like to implement is total
>> field/scattered field incident boundary conditions, which would
>> require different boundary flux operators in the total field and
>> scattered field regions.
> I'm not quite sure what you're getting at here--are you talking about an
> interior boundary between a total-field and a scattered-field region?
Yes, this is exactly what I mean. See "Jens Niegemann, Michael Konig,
Kai Stannigel, Kurt Busch, 'Higher-order time-domain methods for the
analysis of nano-photonic systems', Photonics and Nanostructures –
Fundamentals and Applications 7 (2009) 2–11" for an example of this
being implemented (though they refer to Taflove's FDTD book for all the
gory details of the total-field/scattered-field implementation). The
features in their DG code correspond to what I would be looking to
implement in HEDGE if possible.
> I do respect that you don't want to share your code right now. But a)
> there's no reason to feel bad about any of it--in particular, it's a
> work in progress and not a mess in any way, and b) if we had this
> discussion out in the open, others could potentially learn from it. Plus
> you're missing out on potentially useful feedback from others.
> Therefore I'd like to encourage you to involve the list in stuff like
> this--there's nothing to lose, really. They're nice people. :)
I have noticed the recent exchange of emails with Rafael on the list,
so as soon as I have something which actually works I will post it to
the list. I do have a couple more issues:
Firstly, in the "max_eigenvalue" function of the MaxwellOperator, it
seems that "discr" is an optional argument. However, to turn the epsilon
and mu from a TimeDependentGivenFunction into arrays that I can multiply
and take the maximum of, wouldn't I need to interpolate them onto
"discr"? Can I just rely on it being passed in?
Actually, I was wondering if getting the user to pass a
TimeDependentGivenFunction is really the best way to go about this.
From the user's point of view it would seem easiest to pass in a
dictionary mapping tag names to corresponding (epsilon, mu) values.
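As a sketch of what I have in mind (the node slices below are a stand-in
for whatever a real discretization would provide; none of this is hedge
API): the tag-to-(epsilon, mu) dictionary would be expanded once into
per-node arrays, which are then directly usable, e.g. for the wave-speed
maximum that max_eigenvalue needs.

```python
import numpy as np

# Hypothetical interface: material constants keyed by mesh tag.
materials = {"vacuum": (1.0, 1.0), "dielectric": (4.0, 1.0)}  # (epsilon, mu)

# Stand-in for "discr": each tag covers a range of node indices.
tag_to_node_slice = {"vacuum": slice(0, 4), "dielectric": slice(4, 8)}
n_nodes = 8

# Expand the per-tag constants into per-node coefficient arrays.
epsilon = np.empty(n_nodes)
mu = np.empty(n_nodes)
for tag, (eps_val, mu_val) in materials.items():
    sl = tag_to_node_slice[tag]
    epsilon[sl] = eps_val
    mu[sl] = mu_val

# Maximum wave speed 1/sqrt(epsilon*mu), as max_eigenvalue would need.
print(np.max(1.0 / np.sqrt(epsilon * mu)))  # 1.0
```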
Secondly, and this isn't really a show-stopper, just a nuisance: the MPI
version freezes randomly on the maxwell-2d example when I use 4 or more
CPUs (but strangely not 2 or 3). It just seems to stop part way through
the time stepping, and it doesn't seem repeatable. I'm using Ubuntu x64
9.10, boost 1.40.0, openmpi 1.3.2. I would assume this is some kind of
deadlock problem, though of course the problem could be in the MPI
libraries rather than in HEDGE. I just thought I'd ask if you are able
to reproduce this problem; if not, I will try switching to different MPI
libraries.
VisIt used to accept XML VTK files written by my code, but starting with
version 2.0, this does not appear to be the case any more. Instead, I
get an error message:
    VisIt could not read from the file
    The generated error message was:
    There was an error opening database. It may be an invalid file. VisIt
    tried using the following file format readers to open the file: VTK, Silo
The files represent an unstructured grid (and hence have a .vtu
extension). I've uploaded a sample file here:
(I've only gzipped it to keep the download small--the report refers to
the un-gzipped version.)
Like I said, VisIt 1.x had no problem with these files, and Paraview
seems to like them as well.
Any clues as to what might be wrong would be much appreciated.
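One low-tech check that might help narrow this down is whether the file
still parses as well-formed XML at all; the VTU skeleton below is only
an illustrative stand-in for the real file, which would be read from
disk the same way.

```python
import xml.etree.ElementTree as ET

# Minimal well-formed VTU skeleton standing in for the real .vtu file;
# a truncated or corrupted file would raise ParseError here.
sample = """<?xml version="1.0"?>
<VTKFile type="UnstructuredGrid" version="0.1" byte_order="LittleEndian">
  <UnstructuredGrid>
    <Piece NumberOfPoints="0" NumberOfCells="0">
      <Points/><Cells/><PointData/><CellData/>
    </Piece>
  </UnstructuredGrid>
</VTKFile>"""

root = ET.fromstring(sample)
print(root.tag, root.get("type"))  # VTKFile UnstructuredGrid
```

If the file parses cleanly, the problem is more likely a stricter
attribute or header check in VisIt 2.0 than outright corruption.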
Thanks in advance,