grale.images

This module contains several classes that are related to the images in a gravitational lensing system.

The ImagesData class can be used to store properties of point-like or extended images, and is used to pass data to a lens inversion algorithm.

The LensPlane class calculates the deflection angles in the lens plane of a specific gravitational lens. This mapping can subsequently be used to calculate an image plane, represented by the ImagePlane class. Such an image plane is defined for a specific source, and hence requires you to specify the angular diameter distance of the source.

Once calculated, the image plane can be used to obtain the critical lines and caustics for the lens and source distance involved. It can also be used to render what the images of a specific source shape look like. The source shapes are derived from the SourceImage class and are described in the Source shapes section below.
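
A minimal sketch of how these pieces fit together is shown below; it assumes that lens is an existing grale.lenses.GravitationalLens instance, that Ds and Dds are the relevant angular diameter distances, and that ANGLE_ARCSEC is available from grale.constants (the region, grid size and source shape are purely illustrative):

from grale.images import LensPlane, ImagePlane, CircularSource
from grale.constants import ANGLE_ARCSEC  # assumed location of this constant

# 'lens', 'Ds' and 'Dds' are assumed to exist already (see grale.lenses)
lp = LensPlane(lens, [-20*ANGLE_ARCSEC, -20*ANGLE_ARCSEC],
               [ 20*ANGLE_ARCSEC,  20*ANGLE_ARCSEC], 511, 511)

ip = ImagePlane(lp, Ds, Dds)
caustics = ip.getCaustics()        # connected parts of the caustics
critLines = ip.getCriticalLines()  # connected parts of the critical lines

# Render the images of a small, fading circular source near the origin
src = CircularSource([1*ANGLE_ARCSEC, 0], 0.5*ANGLE_ARCSEC, fade=True)
pixels = ip.renderImages([src])    # 2D NumPy array of ypixels rows, xpixels columns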

Functions

grale.images.centerOnPosition(position, centerRaDec)

Recalculates position so that it is relative to centerRaDec.

grale.images.uncenterOnPosition(position, centerRaDec)

Performs the opposite calculation of centerOnPosition.
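
As a small sketch (assuming both functions return the transformed position rather than modifying it in place, and that ANGLE_DEGREE is available from grale.constants):

from grale.images import centerOnPosition, uncenterOnPosition
from grale.constants import ANGLE_DEGREE  # assumed location of this constant

center = [200.0*ANGLE_DEGREE, -1.5*ANGLE_DEGREE]   # reference RA/Dec (illustrative)
pos = [200.001*ANGLE_DEGREE, -1.499*ANGLE_DEGREE]  # a position near that center

rel = centerOnPosition(pos, center)     # 'pos' expressed relative to 'center'
orig = uncenterOnPosition(rel, center)  # recovers the original position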

grale.images.readInputImagesFile(inputData, isPointImagesFile, lineAnalyzer='default', centerOn=[0, 0])

This function can process a text file (or previously read text data) into one or more ImagesData instances.

If the function you pass in lineAnalyzer returns a source identifier and image identifier, these will be used to group images per source. Otherwise, the behaviour is different depending on the value of isPointImagesFile. If True, then each single line is interpreted as a single point image, and a blank line groups the point images per source. If isPointImagesFile is False, then a single blank line separates the points that belong to different images, and a double blank line separates the images from different sources.

Arguments:
  • inputData: this can be an open file object, a filename or just the text data that’s already read from a file. Lines that start with a # sign are considered to be comments, and are ignored completely.

  • isPointImagesFile: as explained above, this flag indicates how input is treated when no source and image identifiers are specified in the lineAnalyzer function.

  • lineAnalyzer: here you can pass a function that interprets a single (non-empty) line in the input data. By default, the first column will be interpreted as the x-coordinate and the second as the y-coordinate, both expressed in arcseconds. The third column, if present, will be used as the redshift. A different default can be set using setDefaultLineAnalyzer().

    If you provide the function yourself, it should return a dictionary with these entries:

    • x: the x-coordinate of the point, converted to radians (the pre-defined constants can be useful here). The helper functions degreesMinutesSecondsToDegrees() and hoursMinutesSecondsToDegrees() can also be of assistance.

    • y: the y-coordinate of the point, converted to radians (the pre-defined constants can be useful here). The helper function degreesMinutesSecondsToDegrees() can also be of assistance.

    • (optionally) srcnr: the source identifier of this point.

    • (optionally) imgnr: the image identifier of this point.

    • (optionally) z: the redshift for the source. Note that if this is specified for more than one image in a source, they should all have exactly the same redshift.

    • (optionally) group: if certain points in different images belong together, this can be specified by giving them the same group identifier.

    • (optionally) timedelay: if time delay information is known for a point, it can be passed along this way.

  • centerOn: if present, all coordinates will be recentered on this point, using the ImagesData.centerOnPosition() function.

The return value of this function is a list of dictionaries with entries:

  • imgdata: the ImagesData object for a specific source

  • (optionally) z: if it was present in the input, the redshift for the source is stored.

  • (optionally) srcnr: if a specific source identifier was specified in the file, and passed along in the lineAnalyzer function, it is stored here.

Examples: suppose that each line contains entries like the one below (from A free-form lensing grid solution for A1689 with new multiple images)

# i  ID   B05  REF  RAJ2000(h:m:s)  DECJ2000(d:m:s)  z     Delta_beta
1    1.1  1.1  B05  13:11:26.257    -1:19:58.753     3.04  1.03
2    1.2  1.2  B05  13:11:26.088    -1:20:02.261     3.04  0.73
3    1.3  1.3  B05  13:11:29.584    -1:21:09.475     3.04  2.50
...

Then, you could use the following function as the lineAnalyzer parameter:

# The two helper functions below are part of this module; ANGLE_DEGREE is
# assumed to be available from grale.constants
from grale.images import hoursMinutesSecondsToDegrees, degreesMinutesSecondsToDegrees
from grale.constants import ANGLE_DEGREE

def la(l):
    p = l.split()
    if len(p) != 8:
        raise Exception("Expecting 8 input fields in line '{}'".format(l))

    info = { }
    # The 'ID' column, e.g. "1.2", encodes the source and image identifiers
    info["srcnr"], info["imgnr"] = map(int, p[1].split("."))
    info["z"] = float(p[6])
    info["x"] = hoursMinutesSecondsToDegrees(p[4])*ANGLE_DEGREE
    info["y"] = degreesMinutesSecondsToDegrees(p[5])*ANGLE_DEGREE

    return info

If we’d like to center the coordinates on that first point, we could calculate

centerRa = hoursMinutesSecondsToDegrees("13:11:26.257")*ANGLE_DEGREE
centerDec = degreesMinutesSecondsToDegrees("-1:19:58.753")*ANGLE_DEGREE

and pass [centerRa, centerDec] as the centerOn parameter.
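
Putting this together, the call itself could then look as follows (the filename is hypothetical; since each line describes a single point image, isPointImagesFile is set to True):

from grale.images import readInputImagesFile

# "a1689_images.txt" is a hypothetical file containing the table shown above
imgList = readInputImagesFile("a1689_images.txt", True,
                              lineAnalyzer=la, centerOn=[centerRa, centerDec])

for entry in imgList:
    print(entry["srcnr"], entry["z"], entry["imgdata"].getNumberOfImages())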

As another example, in Dark matter dynamics in Abell 3827: new data consistent with standard Cold Dark Matter you’ll find coordinates like the ones below:

# Name  RA         Dec
Ao1     330.47479  -59.94358
Ao2     330.46649  -59.94665
...

The ‘A’ is a label for the source and, as all points refer to the same source, it is present for each point. The second character, ‘o’ in this case, refers to a feature in an image. The third part, ‘1’ and ‘2’ above, indicates the image the point is a part of. We can treat this input in two ways: the image number goes from 1 to 7, so we can process the input in the following way to create seven images, where each point is part of a specific group:

def la(l):
    p = l.split()
    if len(p) != 3:
        raise Exception("Expecting 3 input fields in '{}'".format(l))

    info = { }
    info["srcnr"] = 'A'
    info["imgnr"] = p[0][2]
    info["group"] = p[0][1]
    info["x"] = float(p[1])*ANGLE_DEGREE
    info["y"] = float(p[2])*ANGLE_DEGREE

    return info

Alternatively, we can treat each feature, each set of corresponding points (marked with ‘o’ for example), as a different point source. This is what the following lineAnalyzer function would do:

def la(l):
    p = l.split()
    if len(p) != 3:
        raise Exception("Expecting 3 input fields in '{}'".format(l))

    info = { }
    info["srcnr"] = p[0][1]
    info["imgnr"] = p[0][2]
    info["x"] = float(p[1])*ANGLE_DEGREE
    info["y"] = float(p[2])*ANGLE_DEGREE

    return info

grale.images.getDefaultLineAnalyzer()

Returns the current default line analyzer, used when the lineAnalyzer parameter in readInputImagesFile() is set to "default".

grale.images.setDefaultLineAnalyzer(x)

Sets the current default line analyzer to x, which will be used when the lineAnalyzer parameter in readInputImagesFile() is set to "default".

grale.images.hoursMinutesSecondsToDegrees(s)

Converts a string or array of three parts, specifying hours, minutes and seconds, into a single floating point number that corresponds to a number of degrees.

As an example, passing "01:23:45", "01 23 45", ["01","23","45"] and [1,23,45] will all produce the same output of 20.9375.

grale.images.degreesMinutesSecondsToDegrees(s)

Converts a string or array of three parts, specifying degrees, minutes and seconds, into a single floating point number that corresponds to a number of degrees.

As an example, passing "-1:23:45", "-1 23 45", ["-1","23","45"] and [-1,23,45] will all produce the same output of -1.39583.

grale.images.enlargePolygon(points, offset, simplifyScale=0.02)

For a given set of points that describe a polygon, enlarge this by adding a specified offset. The Shapely library is used to accomplish this.

Arguments:
  • points: this list of points describes the polygon; the last point should be the same as the first point.

  • offset: if this value is positive, it describes a distance that’s added to the sides of the polygon; if it is negative, the absolute value is interpreted as a fraction, and the distance that’s added is calculated as this fraction times the scale of the polygon.

  • simplifyScale: if greater than zero, the newly obtained polygon is simplified, and this describes a tolerance below which points can be removed. It is specified as a fraction of the scale of the polygon.
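
As a small illustrative example, for a unit square (in arbitrary angular units):

from grale.images import enlargePolygon

# A closed square: the last point repeats the first, as required
square = [[0, 0], [1, 0], [1, 1], [0, 1], [0, 0]]

# Positive offset: add a fixed distance of 0.25 to the sides of the polygon
larger = enlargePolygon(square, 0.25)

# Negative offset: add a distance of 10% of the polygon's own scale instead
largerRelative = enlargePolygon(square, -0.1)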

grale.images.createGridTriangles(bottomLeft, topRight, numX, numY, holes=None, enlargeHoleOffset=None, simplifyScale=0.02, triangleExe='triangle', checkOverlap=True)

Creates a grid of triangles, out of which some holes may be cut. This grid can then be used as a null space grid in lens inversions. When such holes are cut out, it is usually a good idea to make them somewhat larger than the images themselves. Even if no enlargement of the holes is required, the enlargeHoleOffset argument must still be specified.

The Triangle program is used to create the triangulation.

The result is returned as an ImagesData instance.

Arguments:
  • bottomLeft: bottom-left corner of the triangulated region.

  • topRight: top-right corner of the triangulated region.

  • numX: number of points in the X-direction.

  • numY: number of points in the Y-direction.

  • holes: a list of holes that should be cut out of the triangulation. If an entry of this list is an ImagesData instance, the hole is based on the border of the image (obtained using ImagesData.getBorder), or, as a fall-back, on the convex hull of the image (using ImagesData.getConvexHull). If it is not an ImagesData instance, it is assumed to be a list of points describing a polygon, where the first point in the list must equal the last point.

  • enlargeHoleOffset: specifies the value by which the holes need to be enlarged. For each hole, this is passed as the offset argument of the enlargePolygon() function. If holes are present, it _must_ be specified, so if you do not want to enlarge the holes you should set this to 0.

  • simplifyScale: this is passed to the enlargePolygon() function, which is used to enlarge the specified holes.

  • triangleExe: the executable for the Triangle program
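
A possible use, cutting a hole at an observed image out of a null space grid (here imgDat is assumed to be an existing ImagesData instance, and ANGLE_ARCSEC is assumed to come from grale.constants):

from grale.images import createGridTriangles
from grale.constants import ANGLE_ARCSEC  # assumed location of this constant

bl = [-50*ANGLE_ARCSEC, -50*ANGLE_ARCSEC]
tr = [ 50*ANGLE_ARCSEC,  50*ANGLE_ARCSEC]

# 'imgDat' is assumed to describe an observed image; its hole is enlarged by
# 25% of the hole's own scale (a negative offset is interpreted as a fraction)
nullGrid = createGridTriangles(bl, tr, 48, 48, holes=[imgDat],
                               enlargeHoleOffset=-0.25)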

grale.images.createPointImagesData(thetas)

This creates an ImagesData instance, with one point per image, according to the entries in the thetas list of positions.
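
For example, to create an images data set containing three point images (ANGLE_ARCSEC is assumed to come from grale.constants):

from grale.images import createPointImagesData
from grale.constants import ANGLE_ARCSEC  # assumed location of this constant

thetas = [[ 1.2*ANGLE_ARCSEC,  0.4*ANGLE_ARCSEC],
          [-0.8*ANGLE_ARCSEC,  1.1*ANGLE_ARCSEC],
          [ 0.3*ANGLE_ARCSEC, -1.6*ANGLE_ARCSEC]]

imgDat = createPointImagesData(thetas)  # three images, one point each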

grale.images.createSourceFromImagesData(imgDat, idx=-1)

For the points in the ImagesData instance, a source shape will be created. This is useful after obtaining backprojected images using e.g. InversionWorkSpace.backProject, to get an estimate of the source shape, and re-calculate the image positions. If only one point is available, a PointSource instance will be created, otherwise a PolygonSource is used. If idx is negative, all points are used, otherwise a specific image is selected.
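
A short sketch of this use, where bpImgDat is assumed to be an ImagesData instance holding back-projected image points:

from grale.images import createSourceFromImagesData

# 'bpImgDat' is assumed to come from e.g. InversionWorkSpace.backProject
src = createSourceFromImagesData(bpImgDat)           # use the points of all images
srcFirst = createSourceFromImagesData(bpImgDat, 0)   # use only those of image 0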

ImagesData

class grale.images.ImagesData

This class is used as a container for the images of a single source. It’s possible to specify a number of images, several points within each image, and a triangulation of the points of an image. You can also add intensity information, time delay information and data about the shear.

__init__(numImages, **kwargs)

Parameters:

  • numImages: the number of images this instance will contain. Can still be increased using the addImage() member function.

  • kwargs can specify the presence of a number of properties, which can be one or more of intensity, shear, shear1, shear2, shearweight, distancefraction, shearsigma, shearsigma1, shearsigma2, redshift, redshiftsigma. E.g. to add intensity information to each point, pass intensity=True as an argument. Property names that are set to False are simply ignored.

In addition to these properties, for backward compatibility the following arguments are recognized as well:

  • intensities: is the same as intensity

  • shearInfo: sets shearsigma1, shearsigma2 and shearweight to True
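
For example, to create a container for two images whose points will carry intensity information (the positions and values below are purely illustrative):

from grale.images import ImagesData

imgDat = ImagesData(2, intensity=True)      # two images, points carry 'intensity'
imgDat.addPoint(0, [1, 2], intensity=10.0)  # a point in the first image
imgDat.addPoint(1, [3, 4], intensity=8.5)   # a point in the second image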

addGroup()

To specify which image points in different images correspond to each other, you can use an image group. This function creates a new group, and returns the identifier for the new group.

addGroupPoint(groupNumber, imageIndex, pointIndex)

Add the specified point to a certain point group.

Parameters:

  • groupNumber: the ID of the group to add the point to

  • imageIndex: specifies the ID of the image the point refers to

  • pointIndex: the ID of the point itself
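
For example, to record that point 3 of image 0 and point 7 of image 1 correspond to the same feature (the indices are illustrative):

grp = imgDat.addGroup()          # create a new point group
imgDat.addGroupPoint(grp, 0, 3)  # point 3 of image 0 ...
imgDat.addGroupPoint(grp, 1, 7)  # ... corresponds to point 7 of image 1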

addImage()

Add an image, and return the index that needs to be used to refer to this image.

addPoint(imageNum, position, **kwargs)

In the image with index imageNum, add a point at the specified 2D position, with properties specified by kwargs. For every point, the same properties must be specified as when the ImagesData instance was constructed.

For example, if intensity was specified as a property, you could call:

newPt = imgDat.addPoint(0, [1,2], intensity=345)

Or, if shear and shearweight were specified:

newPt = imgDat.addPoint(0, [1,2], shear=[3,4], shearweight=5)

addTimeDelayInfo(imageIndex, pointIndex, timeDelay)

Specifies that the time delay timeDelay should be associated with the point with image ID imageIndex and point ID pointIndex.

addTriangle(imageNumber, index1, index2, index3)

This function allows you to customize the triangulation of an image. It says that a triangle should be defined within the image with ID imageNumber, and that its three points are specified by point ID index1, index2 and index3.

centerOnPosition(ra, dec)

Assuming that all image point coordinates were added using right ascension as the first coordinate and declination as the second, this function recalculates all coordinates so that they are now specified relative to ra and dec.

clearTriangulation()

Clears all triangulations in this images data instance.

static fromBytes(b)

This function attempts to interpret the bytes b as an images data set, and returns the new instance if successful.

getAllImagePoints(properties='all')

A convenience function that calls getImagePoints for each image, returning a list of the lists of dictionaries described in that function call. This means that the main list returned will be indexed by image number, and each entry of the main list will itself be a list of points in an image.

The properties flag is simply passed on to getImagePoints, allowing you to select which properties to return.

getBorder(image, asIndices=False)

For the image with ID image, return the border, based on the triangulation that’s stored for this image. By default, a list of coordinates is returned, but if asIndices is True, then the point indices are returned instead. If successful, the first and last point in the list will be the same.

getBottomLeftCorner()

Finds out what the rectangular area surrounded by all image points is, and returns the bottom left corner.

getConvexHull(image, asIndices=False)

Returns the convex hull of the points for image ID image. By default, a list of coordinates is returned, but if asIndices is True, then the point indices are returned instead. If successful, the first and last point in the list will be the same.

getGroupPointIndices(group, pointnr)

For group with ID group and point ID pointnr within this group, return the (img, point) tuple containing the image ID img and point ID point that the group point refers to.

getImagePointIntensity(image, point)

Returns the intensity information stored for the point with ID point inside the image with ID image.

getImagePointPosition(image, point)

Returns the position of the point with ID point in image with ID image.

getImagePointProperty(propertyName, image, point)

For the point with specified index point, in the image with specified index image, return the property value for propertyName.

getImagePoints(image, properties='all')

Returns an iterator that can be used to obtain all points inside the image with ID image. For each point, a dictionary will be provided with a position entry, specifying the 2D position of the point, as well as entries for the properties assigned to the point.

By default, all known properties are listed, but properties can also be set to a different list of property names.

getKnownPropertyNames(reduce=False)

For this ImagesData instance, return the property names that are recognized. Some property names refer to components of a 2D property. For example, the shear property enables shear1 and shear2 as well as just shear. If reduce is set to True, only shear would be returned and not shear1 or shear2.

getNumberOfGroupPoints(group)

Returns the number of points in the group with ID group. If Ngp is returned, valid group point indices range from 0 to Ngp-1.

getNumberOfGroups()

Returns the number of point groups stored in this object. A point group is a set of corresponding points in several images. If Ng is returned, valid point group IDs range from 0 to Ng-1.

getNumberOfImagePoints(i)

Returns the number of image points that are stored for the image with ID i. If this function returns Np, then valid point IDs will range from 0 to Np-1.

getNumberOfImages()

Returns the number of images that are contained in this images data set. If this function returns Ni, valid image IDs will range from 0 to Ni-1.

getNumberOfTimeDelays()

Returns the number of time delays stored in this instance. If Nt is the value returned, valid time delay IDs range from 0 to Nt-1.

getShearComponent1(image, point)

Returns the first shear component stored for the point with ID point inside the image with ID image.

getShearComponent2(image, point)

Returns the second shear component stored for the point with ID point inside the image with ID image.

getShearComponents(image, point)

Returns both shear components stored for the point with ID point inside the image with ID image.

getShearWeight(image, point)

Returns the shear weight stored for the point with ID point inside the image with ID image.

getTimeDelay(index)

For the time delay with ID index, return the (img, pointnr, delay) tuple that describes the point and the time delay associated with the point. The point is specified by giving the image ID img and point ID pointnr within that image.

getTopRightCorner()

Finds out what the rectangular area surrounded by all image points is, and returns the top right corner.

getTriangles(image)

Returns the triangles for the triangulation that’s stored for the image with ID image, as a list of tuples, each containing three point indices within this image.

hasIntensities()

Returns a boolean indicating if intensity information is present.

hasProperty(propertyName)

Returns a flag indicating if the property propertyName is available in this ImagesData instance.

hasShearInfo()

Returns a boolean indicating if shear information is present.

hasTimeDelays()

Returns a boolean indicating if time delay information is present.

hasTriangulation()

Returns a boolean indicating if this instance contains a triangulation of points within one or more images.

static load(fileName)

This function attempts to interpret the file with name fileName as an images data set, and returns the loaded instance if successful.

save(fileName)

Saves the current images data set to the file with name fileName.

setImagePointPosition(image, point, position)

Changes the stored position for the point with ID point in the image with ID image to the coordinates in position.

subtractIntensity(v)

For all intensities stored in this images data instance, the value v will be subtracted.

toBytes()

Returns a binary representation of the current images data set.

uncenterOnPosition(ra, dec)

Performs the inverse operation of centerOnPosition

exception grale.images.ImagesDataException

This exception is raised if something goes wrong in the ImagesData class.

LensPlane & ImagePlane

class grale.images.LensPlane
__init__(lens, bottomLeft, topRight, numX, numY, renderer='default', feedbackObject='default')

This creates a LensPlane instance that covers the area specified by the bottomLeft and topRight corners. Such a LensPlane instance calculates and stores the deflection angles for the grale.lenses.GravitationalLens instance lens on a grid of numX points wide by numY points high.

If renderer is None, the mapping is calculated single threaded, within this Python process. Other renderers can be specified as well, for example to calculate the mapping faster using multiple cores with the MPI renderer. See the grale.renderers module for more information.

Feedback while rendering can be provided by specifying a feedbackObject parameter. See the grale.feedback module for more information about allowed values.
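
Since calculating the deflection grid can take a while, saving the result for later re-use can be convenient. A brief sketch (lens is assumed to be an existing grale.lenses.GravitationalLens instance, and ANGLE_ARCSEC is assumed to come from grale.constants):

from grale.images import LensPlane
from grale.constants import ANGLE_ARCSEC  # assumed location of this constant

# 'lens' is assumed to be a grale.lenses.GravitationalLens instance
lp = LensPlane(lens, [-30*ANGLE_ARCSEC, -30*ANGLE_ARCSEC],
               [ 30*ANGLE_ARCSEC,  30*ANGLE_ARCSEC], 511, 511)

lp.save("lensplane.dat")               # cache the calculated deflections
lp2 = LensPlane.load("lensplane.dat")  # reload them later without recomputing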

createDeflectionGridLens()

This creates a grale.lenses.GravitationalLens instance that uses the calculated deflections. In between the grid points, the values are interpolated. Outside the specified region, the lens effect will not be correct.

static fromBytes(b)

This function attempts to interpret the bytes b as a lens plane instance, and returns the new instance if successful.

getAlphaVectorDerivatives()

Returns a dictionary similar to getAlphas() but now with names alpha_xx, alpha_yy and alpha_xy, describing respectively the derivative of the X-component of the deflection angle in the X-direction, the derivative of the Y-component of the deflection angle in the Y direction and the derivative of the X-component of the deflection angle in the Y-direction.

getAlphas()

Returns a dictionary containing entries alpha_x and alpha_y, containing the deflections stored in the lensplane. Using the names returned by getRenderInfo(), each of these entries is a NumPy grid of shape (ypoints, xpoints), containing the X and Y components of the deflections. The [0,0] entry of a grid corresponds to the value at bottomleft, the [ypoints-1, xpoints-1] entry corresponds to topright.

getLens()

This returns a copy of the grale.lenses.GravitationalLens instance that was used to create the deflections.

getRenderInfo()

Returns a dictionary with the following entries:

  • bottomleft: the bottom-left corner that was specified in the constructor of this instance

  • topright: the top-right corner that was specified in the constructor of this instance

  • xpoints: the number of points in the x-direction, between the left and right coordinates specified by bottomleft and topright, at which the deflection field was sampled

  • ypoints: the number of points in the y-direction, between the bottom and top coordinates specified by bottomleft and topright, at which the deflection field was sampled

static load(fileName)

Attempts to load a LensPlane instance from the file called fileName. If successful, this returns the newly loaded instance.

save(fileName)

Store this instance in a file called fileName.

toBytes()

Returns a binary representation of the current lens plane.

exception grale.images.LensPlaneException

This exception is raised if something goes wrong in the LensPlane class.

class grale.images.ImagePlane
__init__(lensplane, Ds, Dds)

Based on the LensPlane instance in lensplane, which contains the deflection field at a number of grid points, an ImagePlane instance is created for a certain source plane. This source plane has angular diameter distance Ds relative to the observer, and Dds relative to the lens itself.

getCaustics()

This returns a list describing the caustics associated with this image plane. Each entry in the list is itself a list of 2D points, describing a connected part of a caustic.

getCriticalLines()

This returns a list describing the critical lines associated with this image plane. Each entry in the list is itself a list of 2D points, describing a connected part of a critical line.

getDds()

Returns the Dds parameter that was specified in the constructor.

getDs()

Returns the Ds parameter that was specified in the constructor.

getRenderInfo()

Returns a dictionary with the following entries:

  • bottomleft: the bottom-left corner that is relevant for this instance. This is taken from the LensPlane instance specified in the constructor.

  • topright: the top-right corner that is relevant for this instance. This is taken from the LensPlane instance specified in the constructor.

  • xpoints: the number of points in the x-direction, between the left and right coordinates specified by bottomleft and topright, at which the deflection field was sampled. This is taken from the LensPlane instance specified in the constructor.

  • ypoints: the number of points in the y-direction, between the bottom and top coordinates specified by bottomleft and topright, at which the deflection field was sampled. This is taken from the LensPlane instance specified in the constructor.

  • xpixels: based on the image plane to source plane mappings that are known at xpoints * ypoints grid points, a number of pixels can be defined that can contain light from the source plane, and these pixels will be used when rendering the image plane or the source plane with renderImages() or renderSources(). This xpixels value specifies the number of pixels in the x-direction, and is one less than xpoints.

  • ypixels: similar to xpixels, but for the y-direction.

renderImages(sourceList, plane=None, subSamples=9)

For the list of SourceImage derived classes in sourceList, this function calculates what the images look like based on the dimensions and number of pixels for this ImagePlane instance.

The function returns a 2D NumPy array containing ypixels rows, each of xpixels pixels wide (see also getRenderInfo()). If plane is specified, the results are stored in that 2D NumPy instance, which must have the same dimensions.

Each pixel is sub-sampled sqrt(subSamples) times in x- and y- direction, to be able to roughly approximate the integration that’s needed over the surface area of a pixel.

Note that a plain call to matplotlib's imshow will plot the [0,0] value in plane (the bottom-left value) as the top-left corner, causing the result to appear mirrored in the y-direction (the y-axis will point down). A subsequent call to invert_yaxis might be useful.

renderSources(sourceList, plane=None, subSamples=9)

For the list of SourceImage derived classes in sourceList, this function calculates what the sources look like based on the dimensions and number of pixels for this ImagePlane instance. This is what the image plane would look like if the gravitational lens effect could be turned off.

The function returns a 2D NumPy array containing ypixels rows, each of xpixels pixels wide (see also getRenderInfo()). If plane is specified, the results are stored in that 2D NumPy instance, which must have the same dimensions.

Each pixel is sub-sampled sqrt(subSamples) times in x- and y- direction, to be able to roughly approximate the integration that’s needed over the surface area of a pixel.

Note that a plain call to matplotlib's imshow will plot the [0,0] value in plane (the bottom-left value) as the top-left corner, causing the result to appear mirrored in the y-direction (the y-axis will point down). A subsequent call to invert_yaxis might be useful.

segment(plane, threshold=0.0)

For the image plane plane that was rendered using renderImages(), this function looks at all the pixels that have a value larger than threshold. These pixels are divided into regions that are coherent, and a list of these regions is returned. Each region is itself a list of 2D coordinates describing the centers of the pixels.

static static_segment(plane, bottomLeft, topRight, threshold=0.0)

Similar to segment(), but with the bottom-left and top-right corners set manually.

traceBeta(beta)

Estimates the image plane positions to which the source plane position beta corresponds. Returns a list of 2D points.

traceThetaApproximately()

Use the already calculated theta/beta mapping (image plane position to source plane positions), to estimate the mapping for theta vectors that have not been calculated exactly.

exception grale.images.ImagePlaneException

This exception is raised if something goes wrong in the ImagePlane class.

Source shapes

class grale.images.SourceImage

This is a base class for different shapes of sources.

It should not be instantiated directly, but only through one of the subclasses.

addToAngle(ang)

Adds the value ang to the rotation angle of this source. The value must be specified in degrees.

addToAngularPosition(p)

Adds a 2D vector to the position of this source in the source plane.

getAngle()

Returns the rotation angle for the source shape, specified in degrees.

getAngularPosition()

Returns the 2D position of this source in the source plane.

getIntensity(p)

Returns the intensity of the source at the specified angular position (in the source plane).

getMaximumRadius()

Returns the radius outside of which the source does not produce any light.

setAngle(ang)

Sets the rotation angle of this source to ang, which must be specified in degrees.

setAngularPosition(p)

Sets the 2D position of the source in the source plane to some value.

class grale.images.CircularSource
__init__(position, angularRadius, brightnessScale=1.0, fade=False)

Creates a circular source shape with center at 2D location position, and with radius angularRadius. The central brightness of the source is specified by brightnessScale, and depending on fade the brightness will either stay constant within the circular region or will fade to the border.

getAngularRadius()

Returns the radius of the source in the source plane, as specified in the constructor.

getFade()

Returns a boolean indicating if the source brightness fades to zero towards the boundary of the circular region, or if it stays constant.

setAngularRadius(a)

Sets the radius of the source to a.

setFade(f)

Adjusts the fade parameter (see constructor) to f.

class grale.images.EllipticalSource
__init__(position, halfAxis, eccentricity, angle=0.0, brightnessScale=1.0, fade=False)

Creates an elliptical source shape at location position in the source plane, with half long axis halfAxis and eccentricity described by eccentricity. The shape is rotated counter clockwise over an angle angle (in degrees). The central brightness of the source is specified by brightnessScale, and depending on fade the brightness will either stay constant within the elliptical region or will fade towards the border.

getFade()

Returns a boolean indicating if the source brightness fades to zero towards the boundary of the elliptical region, or if it stays constant.

setFade(f)

Adjusts the fade parameter (see constructor) to f.

class grale.images.PolygonSource
__init__(position, polygonPoints, calcHull, angle=0.0, brightnessScale=1.0)

Uses a polygon as a source shape, placing the first point at location position. The polygon itself can be specified as a list of points in polygonPoints; in this case, the polygon is closed automatically, so the last point should not be the same as the first. For this usage, the calcHull parameter should be False.

Alternatively, the polygonPoints can also list a number of points of which the convex hull needs to be calculated and used as the source shape. In this case, the calcHull parameter should be set to True.

The polygon shape is rotated counter clockwise over angle angle (in degrees), and a brightness scale of brightnessScale is used when rendering the source.

class grale.images.DiscreteSource
__init__(data, angularWidth, angularHeight, position, angle=0.0, brightnessScale=1.0)

Uses the 2D NumPy array data as a pixellated source. This image has width and height described by angularWidth and angularHeight, and the center of the source image is placed at location position. It is rotated counter clockwise over the angle in angle (in degrees), and the pixel values will be scaled by the specified brightness scale brightnessScale.

class grale.images.PointSource
__init__(position, brightnessScale=1.0)

Places a point source at location position, with the specified brightness scale.

exception grale.images.SourceImageException

This exception is raised if something goes wrong in one of the SourceImage derived classes.