# MEMIT

<figure><img src="https://3121926949-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2FK2Vdw69n5ipoU5KHLczg%2Fuploads%2FJv4rxHh3b65BUgskl4u8%2Fimage.png?alt=media&#x26;token=b534cf77-9fba-4ddc-a681-3b88e1aa08e6" alt=""><figcaption><p>Paper: MASS-EDITING MEMORY IN A TRANSFORMER</p></figcaption></figure>
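The per-layer edit that MEMIT derives in the paper can be sketched numerically. The snippet below is a toy illustration only (the notation is assumed from the paper, and the identity matrix stands in for the key covariance term); it is not the library's implementation:

```python
import torch

# Toy sketch of the closed-form layer update from the MEMIT paper
# (notation assumed): Delta = R K^T (C + K K^T)^{-1}, where K stacks the
# keys of the new memories, R = V - W K is the residual error on the
# desired values, and C is a covariance term for pre-existing keys
# (replaced by the identity in this illustration).
torch.manual_seed(0)
d_in, d_out, n_edits = 8, 4, 3
W = torch.randn(d_out, d_in)      # original MLP projection weight
K = torch.randn(d_in, n_edits)    # keys selecting the facts to edit
V = torch.randn(d_out, n_edits)   # target values for those keys

R = V - W @ K                                                  # residual error
Delta = R @ K.T @ torch.linalg.inv(torch.eye(d_in) + K @ K.T)  # regularized solve
W_new = W + Delta

err_before = torch.linalg.norm(W @ K - V)
err_after = torch.linalg.norm(W_new @ K - V)
# the regularized update shrinks the error on the edited keys
```

Because the covariance term regularizes the solve, the edited weight only approximately maps each key to its target value; the error on the edited keys decreases rather than vanishing.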

#### execute\_memit() -> Dict\[str, Tuple\[torch.Tensor]]

> Execution function: executes the MEMIT update algorithm for the requested edits at the layers specified in the hyperparameters

```python
def execute_memit(
    model: AutoModelForCausalLM,
    tok: AutoTokenizer,
    requests: List[Dict],
    hparams: MEMITHyperParams,
) -> Dict[str, Tuple[torch.Tensor]]:
```

* **Parameters**
  * model(<mark style="color:purple;">PreTrainedModel</mark>): the model to be edited
  * tok(<mark style="color:purple;">PreTrainedTokenizer</mark>): tokenizer for inputs
  * requests(<mark style="color:purple;">List\[Dict]</mark>): the edit descriptors and targets
  * hparams(<mark style="color:purple;">Hyperparams</mark>): hyperparameters for the editing method
* **Return Type**
  * delta(<mark style="color:purple;">Dict</mark>\[<mark style="color:purple;">str</mark>, Tuple\[<mark style="color:purple;">torch.Tensor</mark>]]): the weight updates (deltas) computed for the edited layers
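To illustrate how the returned delta could be consumed, here is a hypothetical sketch. Following the MEMIT reference code, each entry is assumed to store a `(key_mat, val_mat)` factor pair whose product forms the dense update for the corresponding weight matrix; the weight name and shapes below are made up for the example:

```python
import torch

# Hypothetical illustration of consuming the returned delta (an assumption
# about the stored factorization, not the library's exact format): each
# entry holds a (key_mat, val_mat) pair, and key_mat @ val_mat.T is the
# dense update for the corresponding weight matrix.
def apply_delta(weights, delta):
    """Return a new weight dict with each low-rank update added in."""
    edited = dict(weights)
    for name, (key_mat, val_mat) in delta.items():
        edited[name] = weights[name] + key_mat @ val_mat.T
    return edited

W = torch.zeros(4, 6)
delta = {"mlp.down_proj.weight": (torch.ones(4, 2), torch.ones(6, 2))}
edited = apply_delta({"mlp.down_proj.weight": W}, delta)
```

Keeping the update factored like this is what lets MEMIT compute and transport edits cheaply before adding them to the full weight matrix.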

#### apply\_memit\_to\_model() -> *PreTrainedModel*

> Main function: applies MEMIT to the model for the given requests and returns the edited model (and, optionally, the original weights)

```python
def apply_memit_to_model(
    self,
    model: AutoModelForCausalLM,
    tok: AutoTokenizer,
    requests: List[Dict],
    hparams: MEMITHyperParams,
    copy=False,
    return_orig_weights=False,
    keep_original_weight=False,
    **kwargs
):
```

* **Parameters**
  * model(<mark style="color:purple;">PreTrainedModel</mark>): the model to be edited
  * tok(<mark style="color:purple;">PreTrainedTokenizer</mark>): tokenizer for inputs
  * requests(<mark style="color:purple;">List\[Dict]</mark>): the edit descriptors and targets
  * hparams(<mark style="color:purple;">Hyperparams</mark>): hyperparameters for the editing method
  * copy(<mark style="color:purple;">bool</mark>): whether to edit a copy of the original model
  * return\_orig\_weights(<mark style="color:purple;">bool</mark>): whether to also return the original weights
  * keep\_original\_weight(<mark style="color:purple;">bool</mark>): controls sequential editing
    * `False`: edits are applied sequentially (each edit builds on the previous one, since the original weights are not restored between edits)
    * `True`: edits are not applied sequentially (the original weights are restored after each edit)
* **Return Type**
  * edited\_model(<mark style="color:purple;">PreTrainedModel</mark>): the model after editing
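The behavior of the `copy` and `return_orig_weights` flags can be sketched with a plain dict of tensors standing in for a model. This is a minimal illustration of the flags' semantics under that assumption, not the library's implementation:

```python
import torch
from copy import deepcopy

# Minimal sketch of the copy / return_orig_weights semantics described
# above, using a dict of tensors as a stand-in for a model. Illustration
# only -- not the library's actual code path.
def apply_edit(model, update_fn, copy=False, return_orig_weights=False):
    target = deepcopy(model) if copy else model  # copy=True: leave `model` intact
    orig_weights = {}
    for name in list(target):
        if return_orig_weights:
            orig_weights[name] = target[name].clone()  # snapshot pre-edit weight
        target[name] = update_fn(target[name])
    return target, orig_weights

model = {"w": torch.ones(2)}
edited, orig = apply_edit(model, lambda t: t + 1, copy=True, return_orig_weights=True)
```

With `copy=True` the source weights are untouched, and with `return_orig_weights=True` the pre-edit values come back alongside the edited ones, which is what makes restoring the model after an edit possible.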

#### Example

<pre class="language-python"><code class="lang-python">// ...
hparams = MEMITHyperaParams.from_hparams("llama-7b.yaml")
editor = BaseEditor.from_hparams(hparams)
prompts = ['What university did Watts Humphrey attend?',
    'Which family does Ramalinaceae belong to',
    'What role does Denny Herzig play in football?'
<strong>]
</strong>target_new = ['University of Michigan',
    'Lamiinae',
    'winger'
]
metrics, edited_model, _ = editor.edit(
    prompts=prompts,
    target_new=target_new,
    keep_original_weight=True
)
</code></pre>
